Setting up mfsBSD for receiving ZFS snapshots on systems with low memory

I recently had a need to boot into a fresh server (a VirtualBox VM, actually) with FreeBSD in order to partition the disk and make it ready to restore another machine onto it.

Of course I turned to mfsBSD. I downloaded the ISO, started the zfs recv, and some time later found my VM had lost its disk, with messages that looked like the VM had run out of memory.
No problem, let’s spin our own mfsBSD ISO!

I did the following on my FreeBSD laptop as root.
Firstly, download a FreeBSD ISO (the CD image is fine). If you are using this to restore a snapshot of another machine, make sure you download the ISO that matches the release the snapshot was taken on!
I did not, initially, and was met with some obscure boot messages.

So, let’s take a look at the commands I used:

root@bil-bsd # cd /var/tmp

# Fetch the FreeBSD ISO
root@bil-bsd # fetch https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.2/FreeBSD-11.2-RELEASE-amd64-disc1.iso

# Mount the ISO
root@bil-bsd # mdconfig -a -t vnode -u 10 -f /var/tmp/FreeBSD-11.2-RELEASE-amd64-disc1.iso
root@bil-bsd # mount_cd9660 /dev/md10 /mnt/

# Clone the mfsbsd repo
root@bil-bsd # git clone https://github.com/mmatuska/mfsbsd.git
root@bil-bsd # cd mfsbsd

Now, we want to copy one of the sample files and modify it:

root@bil-bsd # cd conf
root@bil-bsd # cp loader.conf.sample loader.conf
root@bil-bsd # cat << EOF >> loader.conf
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
EOF

I found these settings in the FreeBSD Handbook’s ZFS Advanced Topics page under “Loader Tunables”.
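Once you eventually boot the finished ISO, you can confirm the tunables took effect; a quick check using the matching sysctl names (readable at runtime):

# run on the booted mfsBSD system
sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max vfs.zfs.vdev.cache.size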

Now we can make our ISO:

root@bil-bsd # cd ..
root@bil-bsd # make iso BASE=/mnt/usr/freebsd-dist RELEASE=11.2-RELEASE

For full build information, check out the Build page in the mfsBSD repo.

I copied the ISO back to my Mac, booted from it, and completed the restore described in the post linked at the beginning of this one.

Restore FreeBSD from a ZFS Snapshot

[Image: VirtualBox console showing the booted, restored FreeBSD system]

This was a beautiful sight this morning. The image above is a VirtualBox console showing a booted FreeBSD system. That system has been restored from a ZFS snapshot taken last November. So, how did we do it?

Creating the Snapshot

First, let’s take a look at actually creating the snapshot:

root@manaha:~ # zfs snapshot -r zroot@2018-11-06
root@manaha:~ # zfs send -Rv zroot@2018-11-06 | gzip | ssh backup@home "cat > /data/backup/manaha/2018-11-06.zfs.gz"  

This does a few things:

  1. Snapshot my root dataset recursively
  2. Use zfs send to create a stream of the zroot dataset and everything below it
  3. Pipe to gzip which compresses the stream
  4. SSH to a Raspberry Pi at home and cat the compressed stream into a file

The file mentioned above is simply a gzipped ZFS send stream; I’ve opted to use the extension .zfs.gz to remind myself of this, but of course .flurb would have worked equally well.
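Since the whole point of the exercise is being able to restore, it’s worth occasionally checking that the file really is a valid stream. A quick sanity check, assuming you pull the file onto a FreeBSD box with zstreamdump(8) available:

# inspect the stream header without receiving anything
gunzip -c 2018-11-06.zfs.gz | zstreamdump | head -20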

We now have a snapshot of our system.

Restoring the Snapshot

Creating snapshots is all well and good, but what if we can’t restore them?
I knew from previous experiments that I could zfs receive this file onto my FreeBSD laptop, but what if I did some major damage to my server and needed to restore the whole thing?
It was getting to the point where a reinstallation plus a selective restore from backup was looking like the most reliable (if time-consuming) way to recover from a complete server meltdown.

So how can we restore this image onto a clean machine?

First we need to boot a clean machine. I’m going to use mfsBSD, which I describe in another post - that post is important for a successful restore on servers without much RAM (4GB or less).

Once we have an mfsBSD image, boot into it and enable root access via SSH (I’ll leave you to figure out the best way for you to do this).
Being able to SSH into your mfsBSD instance as root will make life much easier, mainly because you can copy and paste, which you often can’t do in consoles.
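For what it’s worth, here is a minimal sketch of one way to do it from the VM console, assuming a stock mfsBSD image (check your image’s sshd_config before trusting this):

# set a root password and allow root logins over SSH
passwd root
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
service sshd restart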

So now we have convenient access, we need to partition the disks:

# First, find out what your drive is called
#   (mine is ada0 in VirtualBox)
# Second, create a partition scheme
gpart create -s gpt ada0

# Third, partition the disk
gpart add -a 1m -t freebsd-boot -s 512k -l boot ada0
gpart add -a 1m -t freebsd-swap -s 2g -l swap0 ada0
gpart add -a 1m -t freebsd-zfs -l zfs0 ada0

# Lastly, install the ZFS bootloader
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

Note: The partitioning above matches my VPS, and yours should match the machine you’re restoring. Make sure you have a note of it somewhere, as the partition table won’t be part of the backup! My note is here:

gpart show
=>       40  165674928  da0  GPT  (79G)
         40       1024    1  freebsd-boot  (512K)
       1064        984       - free -  (492K)
       2048    4194304    2  freebsd-swap  (2.0G)
    4196352  161478616    3  freebsd-zfs  (77G)

Now we need to create a zpool for us to receive that compressed send stream:

zpool create -o altroot=/mnt zroot gpt/zfs0

The -o altroot=/mnt allows us to mount the zpool and receive the send stream without mucking around with our current environment’s root file system! By default it would want to mount on /.
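Before receiving anything, you can confirm the pool really is rooted under /mnt:

zpool get altroot zroot
zfs list -o name,mountpoint zroot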

Now we have all of the required pieces:

  • root access via SSH
  • partitioned disks
  • a zpool waiting to receive

Let’s log into the Raspberry Pi and get this VPS restored!

cat /data/backup/manaha/2018-11-06.zfs.gz | gunzip | ssh root@172.16.0.224 zfs receive -vF zroot

Sit and wait. Once it’s completed, log back into the server and set the bootfs property; for me it was like this:

zpool set bootfs=zroot/ROOT/default zroot
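You can confirm the property stuck with:

zpool get bootfs zroot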

Then eject the mfsBSD ISO and reboot into your restored system.

acme.sh revisited: ECC & Wildcards

A while ago I wrote about using acme.sh to automate my HTTPS certificates.
In the post I used a domain (bnix.club) along with a number of specific subdomains (“logs.bnix.club”, “f.bnix.club”, “www.bnix.club”).

Today I wanted to add a subdomain to an existing domain: manaha.co.uk.
This has a number of subdomains, so rather than adding a new one I decided to create a wildcard certificate.
While browsing the documentation for acme.sh, I came across ECC certificates, and thought that if I was recreating a certificate that I could use this too.

The process is very similar to the previous post; I’m putting this information here since it is a little different (different enough that I’ll forget what I did in the future…).
I will cut out the output from each command this time, since it will largely be the same.

Note: All steps below were taken as the acme user.

0. Clean environment

Before I started this process, I cleaned out the old certificates and settings

$ acme.sh --remove -d manaha.co.uk
$ rm -rf /usr/local/etc/ssl/manaha/*
$ rm -rf ~/certs/manaha.co.uk/

1. Issuing an ECC Wildcard certificate

$ acme.sh --issue --dns dns_linode -d 'manaha.co.uk' -d '*.manaha.co.uk' --keylength ec-256

This issues a new certificate to manaha.co.uk, and all subdomains (wildcard - see the * in the second domain declaration). It uses Linode DNS to verify I have control of the domains. The --keylength ec-256 part tells acme.sh to create an ECDSA certificate (prime256v1, “ECDSA P-256”).

2. Installing the certificate

This uses the same mechanisms as in the previous post, so make sure you read that if you’re following along:

$ acme.sh --install-cert --ecc -d 'manaha.co.uk' -d '*.manaha.co.uk' --key-file /usr/local/etc/ssl/manaha/privkey.pem --fullchain-file /usr/local/etc/ssl/manaha/fullchain.pem --reloadcmd "sleep 65 && touch /var/db/acme/.restart_nginx"

The only real difference between this post and the last one is the --ecc flag, which tells acme.sh that the certificate being used is ECDSA.
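If you want to reassure yourself that the installed certificate really is ECDSA and covers the wildcard, a quick check with openssl’s standard x509 options:

$ openssl x509 -in /usr/local/etc/ssl/manaha/fullchain.pem -noout -subject -dates
$ openssl x509 -in /usr/local/etc/ssl/manaha/fullchain.pem -noout -text | grep -E 'Public Key Algorithm|DNS:'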

3. Renewing certificates

This was already done for me, and it’s documented in the original post.

FreeBSD Ports unable to use multiple Github repos with the same name

I maintain the hugo (www/gohugo) FreeBSD port. In a recent release (0.51) a situation arose where the dependency “mitchellh/mapstructure” required a fix. The GitHub repository for the dependency was forked to “bep/mapstructure” and a Pull Request made to the original repository. The Pull Request hadn’t been pulled into the original repo in time for the 0.51 release (and still hadn’t been at the time of writing), so the developers instead specified both the original and forked repos as dependencies.

When I got a notification that hugo had been updated, I thought to myself “ooo, new stuff!”, and I set about updating the 0.50 Makefile.

Sometimes the changes are very simple: all I have to do is modify the DISTVERSION and COMMIT_ID, and test the build. Other times, dependencies change and I must redefine GH_TUPLE with those changes - thankfully I have a small hacky shell script to make this easier (a sketch of the idea follows below).
The most challenging changes I’ve come across so far are where the project has re-imagined how it defines dependencies (hugo has gone from a vendored JSON file, to godep, to gomod).
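My actual script isn’t important, but for flavour, here’s a hypothetical sketch of the idea, using the unique group names discussed later in this post. It only handles plain tagged dependencies; real go.mod files also contain pseudo-versions built from commit hashes, which need massaging by hand:

# hypothetical helper: turn go.mod require lines into GH_TUPLE entries, e.g.
#   "github.com/mitchellh/mapstructure v1.0.0" becomes
#   "mitchellh:mapstructure:v1.0.0:mitchellh_mapstructure/src/github.com/mitchellh/mapstructure \"
awk '$1 ~ /^github\.com\// {
	split($1, p, "/");      # p[2] = account, p[3] = project
	printf "\t\t%s:%s:%s:%s_%s/src/%s \\\n", p[2], p[3], $2, p[2], p[3], $1
}' go.mod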

This release required the usual changes, plus a change to dependencies, including adding this new forked repo.
At first I didn’t notice it. Perhaps I should be more vigilant, but if everything works I am generally happy. My script parsed the go.mod file and gave me a list to put into the GH_TUPLE variable; make makesum ran with no problems, but then I tried a make and got the following output:

===>  Building for gohugo-0.51
hugolib/page_ref.go:21:2: cannot find package "github.com/bep/mapstructure" in any of:
	/usr/local/go/src/github.com/bep/mapstructure (from $GOROOT)
	/usr/ports/www/gohugo/work/hugo-0.51/src/github.com/bep/mapstructure (from $GOPATH)
minifiers/minifiers.go:27:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2 expects import "github.com/tdewolff/minify"
minifiers/minifiers.go:28:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/css expects import "github.com/tdewolff/minify/css"
minifiers/minifiers.go:29:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/html expects import "github.com/tdewolff/minify/html"
minifiers/minifiers.go:30:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/js expects import "github.com/tdewolff/minify/js"
minifiers/minifiers.go:31:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/json expects import "github.com/tdewolff/minify/json"
minifiers/minifiers.go:32:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/svg expects import "github.com/tdewolff/minify/svg"
minifiers/minifiers.go:33:2: code in directory /usr/ports/www/gohugo/work/hugo-0.51/src/github.com/tdewolff/minify/v2/xml expects import "github.com/tdewolff/minify/xml"
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/www/gohugo
*** Error code 1

Stop.
make: stopped in /usr/ports/www/gohugo

Hmm, odd! It seems that “bep/mapstructure” is missing… Let’s have a look in the Makefile to double-check…

# grep -n mapstructure Makefile
31:		bep:mapstructure:bb74f1d:mapstructure/src/github.com/bep/mapstructure \
63:		mitchellh:mapstructure:v1.0.0:mapstructure/src/github.com/mitchellh/mapstructure \

It is there, as is “mitchellh/mapstructure”. Very odd! Let’s have a look in distinfo:

# grep -n mapstructure distinfo 
88:SHA256 (gohugo/mitchellh-mapstructure-v1.0.0_GH0.tar.gz) = 6eddc2ee4c69177e6b3a47e134277663f7e70b1f23b6f05908503db9d5ad5457
89:SIZE (gohugo/mitchellh-mapstructure-v1.0.0_GH0.tar.gz) = 18841

OK, so only “mitchellh/mapstructure” made it into distinfo… Just for fun, let’s put “bep/mapstructure” below “mitchellh/mapstructure” in the Makefile and see what happens:

# grep -n mapstructure Makefile
62:		mitchellh:mapstructure:v1.0.0:mapstructure/src/github.com/mitchellh/mapstructure \
63:		bep:mapstructure:bb74f1d:mapstructure/src/github.com/bep/mapstructure \

# make makesum

# grep -n mapstructure distinfo
88:SHA256 (gohugo/bep-mapstructure-bb74f1d_GH0.tar.gz) = 5bf27fc22a2feb060c65ff643880a8ac180fac9326a86b82d6a3eabe78fa9738
89:SIZE (gohugo/bep-mapstructure-bb74f1d_GH0.tar.gz) = 18666

Ah ha! Right, something is causing later declarations of the same repo name to overwrite earlier ones.
At this point I took to #freebsd-ports on Freenode and asked some questions. After a few comments about the reason this was needed, and a reminder that Go-based ports must be fully self-contained, vishwin suggested I take a look at bsd.sites.mk at around line 371 - this is where USE_GITHUB is defined.

Opening up bsd.sites.mk, I realised that my Makefile skills were severely lacking! However, looking above line 371 I noted this:

    352 # In order to use GitHub your port must define USE_GITHUB and the following
    353 # variables:
    354 #
    355 # GH_ACCOUNT    - account name of the GitHub user hosting the project
    356 #                 default: ${PORTNAME}
    357 #
    358 # GH_PROJECT    - name of the project on GitHub
    359 #                 default: ${PORTNAME}
    360 #
    361 # GH_TAGNAME    - name of the tag to download (2.0.1, hash, ...)
    362 #                 Using the name of a branch here is incorrect. It is
    363 #                 possible to do GH_TAGNAME= GIT_HASH to do a snapshot.
    364 #                 default: ${DISTVERSION}
    365 #
    366 # GH_SUBDIR     - directory relative to WRKSRC where to move this distfile's
    367 #                 content after extracting.
    368 #
    369 # GH_TUPLE      - above shortened to account:project:tagname[:group][/subdir]
    370 #
    371 .if defined(USE_GITHUB)

The definition of GH_TUPLE is interesting: having looked at other ports’ Makefiles for examples, I had always seen the [:group] part as a repeat of project, but it is not.

Looking in the FreeBSD Porters Handbook, specifically at Fetching Multiple Files from GitHub, Example 5.15 shows the following:

PORTNAME=	foo
DISTVERSION=	1.0.2

USE_GITHUB=	yes
GH_ACCOUNT=	bar:icons,contrib
GH_PROJECT=	foo-icons:icons foo-contrib:contrib
GH_TAGNAME=	1.0:icons fa579bc:contrib
GH_SUBDIR=	ext/icons:icons

CONFIGURE_ARGS=	--with-contrib=${WRKSRC_contrib}

This will fetch three distribution files from github. The default one comes from foo/foo and is version 1.0.2. The second one, with the icons group, comes from bar/foo-icons and is in version 1.0. The third one comes from bar/foo-contrib and uses the Git commit fa579bc. The distribution files are named foo-foo-1.0.2_GH0.tar.gz, bar-foo-icons-1.0_GH0.tar.gz, and bar-foo-contrib-fa579bc_GH0.tar.gz.

What the above is doing is defining groups (“icons” and “contrib”) on the GH_ACCOUNT line, then using these groups on subsequent lines to collect details for each project. Effectively, it says:

  1. Download the project foo from account foo that is tagged 1.0.2
  2. Download project foo-icons from account bar that is tagged 1.0 and move the extracted files to ext/icons
  3. Download project foo-contrib from account bar that is commit fa579bc

Although in my opinion this syntax is a mess, it is clear that these groups let us tie together multiple GitHub entries.

Now moving onto Example 5.16 which shows the usage of GH_TUPLE:

PORTNAME=	foo
DISTVERSION=	1.0.2

USE_GITHUB=	yes
GH_TUPLE=	bar:foo-icons:1.0:icons/ext/icons \
		bar:foo-contrib:fa579bc:contrib

CONFIGURE_ARGS=	--with-contrib=${WRKSRC_contrib}

Grouping was used in the previous example with bar:icons,contrib. Some redundant information is present with GH_TUPLE because grouping is not possible.

When I first read the documentation, I struggled with that last bit. I took it to mean that no grouping was done. However, the behaviour above made me wonder whether that was really the case. I changed the group from project to account_project:

# grep -n mapstructure Makefile
31:		bep:mapstructure:bb74f1d:bep_mapstructure/src/github.com/bep/mapstructure \
63:		mitchellh:mapstructure:v1.0.0:mitchellh_mapstructure/src/github.com/mitchellh/mapstructure \

After doing another make, it worked! Grouping was being done on GH_TUPLE entries, and I don’t think it’s meant to be! I have filed Bug 234468 describing the issue, in part to open discussion, and in part to either clarify the documentation or get Mk/bsd.sites.mk changed to ignore groups in GH_TUPLE entries.

I’ve updated my hacky helper scripts to take this into account, and I’ve updated the Port successfully twice now.

Ansible ad-hoc command on multiple hosts

Quite often I want to run an ad-hoc command against a number of hosts; usually this is a subset of an existing group (often nodes newly added to that group).

Let’s say we had an inventory that looks like this:

[webhosts]
web1
web2
web5

[jenkins]
j1
j2
j3
mac-j2
mac-j3
wj1

We’ve just added mac-j2 and mac-j3 and run some playbook against them, but realised we want to reboot them after the playbook for some reason.

Until today, I thought I had two choices:

  1. Put these two hosts into a separate, temporary, group.
  2. Run the ad-hoc command against each machine individually.

I would usually do the first when I had many hosts, and the second if I had two or three.

Today I discovered a third choice (it certainly works in Ansible 2.7, and may work in earlier versions):

  • List the hosts inside quotation marks, separated by spaces.

So, we can do something like this:

ansible 'mac-j2 mac-j3' -i boxen -m reboot --become
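The same quoted pattern works with any module, so you can check you’ve targeted the right boxes before doing anything destructive:

ansible 'mac-j2 mac-j3' -i boxen -m ping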

VNC over SSH via a Raspberry Pi

Often when I’m away from home I leave my iMac on in case I need to grab anything from it remotely, plus it’s ready to go when I get home. Usually I access it using SSH, via a Raspberry Pi:

+-----------+              +---------------+
| Internet  +--------------> Raspberry Pi  |
+-----------+              +--+------------+
                              |
                              |
                           +--v----+
                           | iMac  |
                           +-------+

This is fine for SSH, but when I want to use other things (for example, VNC) it can be a challenge.

The following is mainly for me, but hopefully helpful to others:

  1. On the Raspberry Pi, in /etc/ssh/sshd_config, make sure that GatewayPorts is set to yes (GatewayPorts yes); if you needed to change it, then go ahead and restart sshd (possibly service sshd restart). This will allow remotely forwarded ports to bind to all interfaces.
  2. SSH to the iMac (via the Raspberry Pi) and then SSH to the Raspberry Pi from the iMac using the following: ssh -R 5900:localhost:5900 user@raspi - this will forward port 5900 (the VNC port), making it available on the Raspberry Pi on the same port.
  3. On the system you are currently sitting at, SSH to the Raspberry Pi using the following: ssh -L 5901:localhost:5900 user@raspi.home - this will forward port 5900 from the Raspberry Pi to 5901 on your localhost (5901 is chosen so it doesn’t clash with a local VNC service)
  4. Point your VNC viewer at localhost:5901 and enjoy!
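For future me, here are both tunnels from steps 2 and 3 in one place (hostnames and usernames are from my setup; substitute your own):

# on the iMac, reached by SSHing through the Pi:
ssh -R 5900:localhost:5900 user@raspi

# on the system you are sitting at:
ssh -L 5901:localhost:5900 user@raspi.home

# then point your VNC viewer at localhost:5901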

acme.sh, plus Linode, plus DNS, plus FreeBSD

I’ve been meaning to use Let’s Encrypt for some time now. I don’t really have a good excuse as to why it’s taken so long, other than that I wanted to use DNS to verify I owned the relevant domains, and I hadn’t found an easy enough tool to use.

My lame excuse faltered when Dan Langille ported the acme.sh client to FreeBSD.

It’s taken me a while to figure out exactly how I ought to use it, as I wasn’t 100% sure about what I was doing. But after a few false starts, I’ve put my first certificates into use!

This post describes the steps I’ve taken to get the certs in place, and is mainly documentation for me later on. That said, I hope it’s general enough for others to find it helpful.

1. Installing

First off, we need to install acme.sh, so as root we can do the following:

# pkg install acme.sh

This does a number of things, but most importantly it creates an acme user with the relevant files to start configuring.

2. Configuration

Switching to the acme user, there should be an .acme.sh directory (note the leading full stop to make it hidden), and it is here we create our account.conf:

$ cat .acme.sh/account.conf
USER_PATH='/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/var/db/acme/bin'
LINODE_API_KEY='aVeryLongSeeminglyRandomString'
DEFAULT_DNS_SLEEP="900"
CERT_HOME="/var/db/acme/certs"
LOG_FILE='/var/db/acme/logs/acme.sh.log'

The LINODE_API_KEY is generated by going into the Linode Manager, clicking on “my profile”, and selecting “API Keys” from the submenu. Create a new key and make sure you save it; it won’t be shown in full again! The DEFAULT_DNS_SLEEP is set to 900 seconds (15 minutes) because this is the time between Linode DNS refreshes.

3. Issuing a certificate

Now we’ve done the configuration, we can issue the certificate:

$ acme.sh --issue --dns dns_linode -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club 
[Wed Nov 1 21:22:00 BST 2017] Creating domain key
[Wed Nov 1 21:22:00 BST 2017] The domain key is here: /var/db/acme/certs/bnix.club/bnix.club.key
[Wed Nov 1 21:22:00 BST 2017] Multi domain='DNS:logs.bnix.club,DNS:f.bnix.club,DNS:www.bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting domain auth token for each domain
[Wed Nov 1 21:22:00 BST 2017] Getting webroot for domain='bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting new-authz for domain='bnix.club'
[Wed Nov 1 21:22:02 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:02 BST 2017] Getting webroot for domain='logs.bnix.club'
[Wed Nov 1 21:22:02 BST 2017] Getting new-authz for domain='logs.bnix.club'
[Wed Nov 1 21:22:03 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:03 BST 2017] Getting webroot for domain='f.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:05 BST 2017] Getting webroot for domain='www.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] Getting new-authz for domain='www.bnix.club'
[Wed Nov 1 21:22:06 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:06 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:06 BST 2017] Using Linode
[Wed Nov 1 21:22:08 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:08 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:08 BST 2017] Using Linode
[Wed Nov 1 21:22:09 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:09 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:09 BST 2017] Using Linode
[Wed Nov 1 21:22:11 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:11 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:11 BST 2017] Using Linode
[Wed Nov 1 21:22:13 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:13 BST 2017] Sleep 900 seconds for the txt records to take effect
[Wed Nov 1 21:37:57 BST 2017] Verifying:bnix.club
[Wed Nov 1 21:38:01 BST 2017] Success
[Wed Nov 1 21:38:01 BST 2017] Verifying:logs.bnix.club
[Wed Nov 1 21:38:05 BST 2017] Success
[Wed Nov 1 21:38:05 BST 2017] Verifying:f.bnix.club
[Wed Nov 1 21:38:09 BST 2017] Success
[Wed Nov 1 21:38:10 BST 2017] Verifying:www.bnix.club
[Wed Nov 1 21:38:14 BST 2017] Success
[Wed Nov 1 21:38:14 BST 2017] Using Linode
[Wed Nov 1 21:38:16 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:16 BST 2017] Using Linode
[Wed Nov 1 21:38:18 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:18 BST 2017] Using Linode
[Wed Nov 1 21:38:20 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:20 BST 2017] Using Linode
[Wed Nov 1 21:38:22 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:22 BST 2017] Verify finished, start to sign.
[Wed Nov 1 21:38:24 BST 2017] Cert success.
-----BEGIN CERTIFICATE-----
v5y$RaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNg
v5y$RaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNg
v5y$RaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNgRaNdOmStRiNg
-----END CERTIFICATE-----
[Wed Nov 1 21:38:24 BST 2017] Your cert is in  /var/db/acme/certs/bnix.club/bnix.club.cer 
[Wed Nov 1 21:38:24 BST 2017] Your cert key is in  /var/db/acme/certs/bnix.club/bnix.club.key 
[Wed Nov 1 21:38:25 BST 2017] The intermediate CA cert is in  /var/db/acme/certs/bnix.club/ca.cer 
[Wed Nov 1 21:38:25 BST 2017] And the full chain certs is there:  /var/db/acme/certs/bnix.club/fullchain.cer 

The command tells acme.sh to issue a new certificate using Linode DNS entries for the list of sites (each address is preceded by a -d). We now have a certificate sitting in the certs directory (as configured in our account.conf). Now we just need to install the certificates.

4. Installing certificates

I’ve opted to allow the acme user to write to the directory where these certificates will be installed; they will then be readable by the www user that nginx runs as.

$ mkdir -p /usr/local/etc/ssl/bnix
$ acme.sh --install-cert -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club --key-file /usr/local/etc/ssl/bnix/privkey.pem --fullchain-file /usr/local/etc/ssl/bnix/fullchain.pem --reloadcmd "sleep 65 && touch /var/db/acme/.restart_nginx"
[Mon Oct 23 21:56:44 BST 2017] Installing key to:/usr/local/etc/ssl/bnix/privkey.pem
[Mon Oct 23 21:56:44 BST 2017] Installing full chain to:/usr/local/etc/ssl/bnix/fullchain.pem
[Mon Oct 23 21:56:44 BST 2017] Run reload cmd: sleep 65 && touch /var/db/acme/.restart_nginx
[Mon Oct 23 21:57:49 BST 2017] Reload success

You’ll notice the odd reload command there. I don’t want to give the acme user permission to restart nginx directly, so instead I wait a while and create a restart file. I then have the following script in the root user’s directory:

#!/bin/sh
# Reload nginx if acme.sh has left its restart flag behind
if [ -f /var/db/acme/.restart_nginx ]; then
    service nginx forcereload
    rm -f /var/db/acme/.restart_nginx
fi

Then I have the following in /etc/crontab:

#minute hour    mday    month   wday    who command
*   *   *   *   *   root    /bin/sh /root/scripts/restart_nginx.sh

This means that every minute, root checks whether that file exists; if it does, root reloads nginx and removes the file.

5. Renewing certificates

In order to renew certificates, the acme user must check once a day (using cron):

#minute hour    mday    month   wday    command
43 0 * * * /usr/local/sbin/acme.sh --cron --home "/var/db/acme/.acme.sh"

This will cause cron to run the acme.sh script every day at 00:43.

Please note: choose a time other than 00:43, to spread the load on both Linode’s DNS servers and the Let’s Encrypt servers.

freebsd-update In A Boot Environment

Yesterday FreeBSD 11.1 was released. Once I got into work I started upgrading the VM I use for day-to-day activities. After creating a Boot Environment (BE) using beadm(1), and running the upgrade and install parts of freebsd-update(8), I rebooted into a newly activated BE only to find I had an 11.1 kernel, but an 11.0 userland…

I had no idea what I’d done wrong. After some questions on the FreeBSD Forums, I figured it out. Previously, I had only run the install process once; the install process needs to be done three times:

  1. Install the kernel
  2. Install userland
  3. Cleanup

Usually, one would reboot between these actions. This is what I had attempted, but when I rebooted into my new BE I got the familiar message:

No updates are available to install.
Run '/usr/sbin/freebsd-update fetch' first.

So, what’s the correct procedure? Hunting across the Internet, I found many examples of how people thought it should be done. The most common was:

  1. Create a BE
  2. Activate the BE
  3. Reboot into the BE
  4. Fetch the upgrade
  5. Install the upgraded kernel
  6. Reboot
  7. Install upgraded userland
  8. Reboot (sometimes this seemed optional)
  9. Run install again for cleanup
  10. Reboot

Now, that seems like an awful lot of downtime. Back at Sun, BEs were introduced to me as a convenient roll-back method if things went wrong, and also as a way to reduce downtime caused by upgrades (this was called Live Upgrade). Having all of this downtime did not appeal to me one bit.

So, how can we reduce downtime while using FreeBSD Boot Environments? Run all three installation tasks one after the other. Since we are upgrading an essentially dormant system (the BE hasn’t been activated and rebooted into yet) we don’t need the in-between reboots. Here’s my process:
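Roughly, assuming beadm(1) and freebsd-update(8)’s -b flag to operate on the mounted BE (adjust the BE name and target release to suit), it looks like this:

# create and mount the new BE, then upgrade it in place
beadm create 11.1
beadm mount 11.1 /mnt
freebsd-update -b /mnt upgrade -r 11.1-RELEASE
freebsd-update -b /mnt install    # first pass: kernel
freebsd-update -b /mnt install    # second pass: userland
freebsd-update -b /mnt install    # third pass: cleanup
beadm activate 11.1
shutdown -r now                   # the one and only reboot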

Now you can keep the previous BE around until you’re happy everything is working and then destroy it.

I’ve not tried it, but I see no reason why this wouldn’t work for updates (e.g. 11.0-RELEASE-p0 to 11.0-RELEASE-p1) too.

Happy upgrading!

BSD Desktop Week

Last week on Twitter I was promoting BSD on the desktop.

I got a small flurry of “likes” and “retweets” regarding a number of posts, and one (I think) real person even posted to #BSDdesktopWeek!

Nobody that I know of took me up on the offer of switching their everyday desktop to BSD for the week, but then I did only start promoting it the Friday before it started…

But why did I want to promote BSD on the desktop?  Firstly, pretty much all the arguments we made a few years ago about why Linux was good for the desktop hold true for BSD.  Secondly, since I started using FreeBSD on a laptop at home I have realised just how well engineered the system is, how logical everything feels, and how great the community is.

Having discovered that second point, I have begun to switch my Linux servers to FreeBSD.  With the power of jails and ZFS, I now have one virtual machine running three different services (persistent IRC client, GitLab server, and ownCloud server), all segregated from each other, and the whole thing uses less than 20GB of storage.  Management is very easy and everything is super configurable.

Since I came to FreeBSD as a server via the desktop, it is somewhat my hope that casual (techie) desktop users who use a BSD every day on the desktop might choose a BSD for any future server requirements.  Techie users would also bring with them a wealth of knowledge from other systems to improve the BSDs in untold ways; and, just playing the numbers game, more users would encourage software to become more portable and BSD-friendly.

How was my week using BSD on the desktop?  I must confess I missed OS X, and it didn’t help that I ran FreeBSD in VirtualBox on my Mac.  I missed single click, Magic Mouse support, keyboard shortcuts (e.g. for generating a hyphen instead of a dash, or printing typographic quotation marks), and certain applications that only run on OS X (mainly Tweetbot and Reeder). Although some applications, like 1Password, ran well in Wine, others (like Dropbox) did not; 1Password uses Dropbox to sync files, so bummer! But for general desktopy stuff, it worked really nicely.  I happily did email, wrote most of this blog post, watched YouTube, did some perl script editing, some research, etc.  It just worked as a functional desktop.

In the middle of the week, I found a really beautiful email client that is still in development called N1.  I downloaded the source and attempted to compile, but no luck.  Having opened a ticket, the developers are making an attempt to make sure this works on FreeBSD as well as the other supported systems—which I think is awesome of them!

Next year I think I’ll start promoting earlier, and perhaps try to draw in some support from other BSD users.

Raspberry Pi USB-GPIO OS X and FreeBSD

I play with my Raspberry Pi so rarely that I forget how to use my CP2102 serial converter to connect from my iMac or FreeBSD laptop to the Raspberry Pi, so I thought I’d write a blog post to give myself an easy place to come back to when I need to remember how…

Connecting the cables

[Image: Raspberry Pi Model B connected to a USB-UART adaptor. “Raspberry Pi USB UART Cabling” by Ben Lavery is licensed under a Creative Commons Attribution 4.0 International License.]

On a Mac

  1. Acquire a CP2102 serial converter
  2. Download the driver (direct link to zip file)
  3. Reboot
  4. Attach Raspberry Pi using a USB 2.0 or older port (not USB 3)
  5. Open up Terminal.app and type: screen -fn /dev/cu.SLAB_USBtoUART 115200 (the -fn flag disables flow control). And you’re done!

On FreeBSD

  1. Acquire a CP2102 serial converter
  2. Load uslcom.ko—either add it to loader.conf, compile it into the kernel, or as root do: kldload uslcom.ko

  3. Attach Raspberry Pi via any USB port

  4. Open up a terminal and as root (or via sudo), type: cu -l /dev/ttyU0 -s 115200

And you’re done!