Nov 5, 2017 · 6 minute read
I’ve been meaning to use Let’s Encrypt for some time now. I don’t really have a good excuse for why it’s taken so long, other than that I wanted to use DNS to verify I owned the relevant domains, and I hadn’t found an easy enough tool to do it with.
My lame excuse faltered when Dan Langille ported the acme.sh client to FreeBSD.
It’s taken me a while to figure out exactly how I ought to use it, as I wasn’t 100% sure about what I was doing. But after a few false starts, I’ve placed my first certificates into use!
This post describes the steps I’ve taken to get the certs in place, and is mainly documentation for me later on. That said, I hope it’s general enough for others to find it helpful.
First off, we need to install acme.sh, so as root we can do the following:
# pkg install acme.sh
This does a number of things, but most importantly it creates an acme user with the relevant files to start configuring.
Switching to the acme user, there should be an .acme.sh directory (note the leading full stop to make it hidden), and it is here we create our account.conf:
$ cat .acme.sh/account.conf
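The contents of account.conf aren’t reproduced above; here’s a minimal sketch of what mine contains, based on the two settings described in this post (the API key is a placeholder, and the exact quoting acme.sh writes may differ):

```shell
# Sketch of .acme.sh/account.conf — the key value below is a placeholder
LINODE_API_KEY="0123456789abcdef"
DEFAULT_DNS_SLEEP="900"
```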
The LINODE_API_KEY is generated by going into the Linode Manager, clicking on “my profile”, and selecting “API Keys” from the submenu. Create a new key and make sure you save it; it won’t be shown in full again!
The DEFAULT_DNS_SLEEP is set to 900 seconds (15 minutes) because this is the time between Linode DNS refreshes.
3. Issuing a certificate
Now we’ve done the configuration, we can issue the certificate:
$ acme.sh --issue --dns dns_linode -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club
[Wed Nov 1 21:22:00 BST 2017] Creating domain key
[Wed Nov 1 21:22:00 BST 2017] The domain key is here: /var/db/acme/certs/bnix.club/bnix.club.key
[Wed Nov 1 21:22:00 BST 2017] Multi domain='DNS:logs.bnix.club,DNS:f.bnix.club,DNS:www.bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting domain auth token for each domain
[Wed Nov 1 21:22:00 BST 2017] Getting webroot for domain='bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting new-authz for domain='bnix.club'
[Wed Nov 1 21:22:02 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:02 BST 2017] Getting webroot for domain='logs.bnix.club'
[Wed Nov 1 21:22:02 BST 2017] Getting new-authz for domain='logs.bnix.club'
[Wed Nov 1 21:22:03 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:03 BST 2017] Getting webroot for domain='f.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:05 BST 2017] Getting webroot for domain='www.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] Getting new-authz for domain='www.bnix.club'
[Wed Nov 1 21:22:06 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:06 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:06 BST 2017] Using Linode
[Wed Nov 1 21:22:08 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:08 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:08 BST 2017] Using Linode
[Wed Nov 1 21:22:09 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:09 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:09 BST 2017] Using Linode
[Wed Nov 1 21:22:11 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:11 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:11 BST 2017] Using Linode
[Wed Nov 1 21:22:13 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:13 BST 2017] Sleep 900 seconds for the txt records to take effect
[Wed Nov 1 21:37:57 BST 2017] Verifying:bnix.club
[Wed Nov 1 21:38:01 BST 2017] Success
[Wed Nov 1 21:38:01 BST 2017] Verifying:logs.bnix.club
[Wed Nov 1 21:38:05 BST 2017] Success
[Wed Nov 1 21:38:05 BST 2017] Verifying:f.bnix.club
[Wed Nov 1 21:38:09 BST 2017] Success
[Wed Nov 1 21:38:10 BST 2017] Verifying:www.bnix.club
[Wed Nov 1 21:38:14 BST 2017] Success
[Wed Nov 1 21:38:14 BST 2017] Using Linode
[Wed Nov 1 21:38:16 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:16 BST 2017] Using Linode
[Wed Nov 1 21:38:18 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:18 BST 2017] Using Linode
[Wed Nov 1 21:38:20 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:20 BST 2017] Using Linode
[Wed Nov 1 21:38:22 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:22 BST 2017] Verify finished, start to sign.
[Wed Nov 1 21:38:24 BST 2017] Cert success.
[Wed Nov 1 21:38:24 BST 2017] Your cert is in /var/db/acme/certs/bnix.club/bnix.club.cer
[Wed Nov 1 21:38:24 BST 2017] Your cert key is in /var/db/acme/certs/bnix.club/bnix.club.key
[Wed Nov 1 21:38:25 BST 2017] The intermediate CA cert is in /var/db/acme/certs/bnix.club/ca.cer
[Wed Nov 1 21:38:25 BST 2017] And the full chain certs is there: /var/db/acme/certs/bnix.club/fullchain.cer
The command tells acme.sh to issue a new certificate using Linode DNS entries for the list of sites (each address is preceded with a -d). We now have a certificate sitting in the certs directory (as specified in our account file). Now we just need to install the certificates.
4. Installing certificates
I’ve opted to allow the acme user to write to the directory where these certificates will be installed; they will then be readable by the www user that nginx runs as.
$ mkdir -p /usr/local/etc/ssl/bnix
$ acme.sh --install-cert -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club --key-file /usr/local/etc/ssl/bnix/privkey.pem --fullchain-file /usr/local/etc/ssl/bnix/fullchain.pem --reloadcmd "sleep 65 && touch /var/db/acme/.restart_nginx"
[Mon Oct 23 21:56:44 BST 2017] Installing key to:/usr/local/etc/ssl/bnix/privkey.pem
[Mon Oct 23 21:56:44 BST 2017] Installing full chain to:/usr/local/etc/ssl/bnix/fullchain.pem
[Mon Oct 23 21:56:44 BST 2017] Run reload cmd: sleep 65 && touch /var/db/acme/.restart_nginx
[Mon Oct 23 21:57:49 BST 2017] Reload success
You’ll notice the odd reload command there. I don’t want to give the acme user direct permission to restart nginx, so instead I wait for a time and create a restart file. I then have the following script in my root user’s directory:
#!/bin/sh
if [ -f /var/db/acme/.restart_nginx ]; then
    service nginx force-reload
    rm -f /var/db/acme/.restart_nginx
fi
Then I have the following in /etc/crontab:
#minute hour mday month wday who command
* * * * * root /bin/sh /root/scripts/restart_nginx.sh
This means that every minute, root checks to see if that file exists; if it does, root restarts nginx and removes the file.
5. Renewing certificates
In order to renew certificates, the acme user must check once a day (using cron):
#minute hour mday month wday command
43 0 * * * /usr/local/sbin/acme.sh --cron --home "/var/db/acme/acme.sh"
This will cause cron to run the acme.sh script every day at 00:43.
Please note: choose a time other than 00:43, to help spread the load on both Linode’s DNS servers and the Let’s Encrypt servers.
Jul 27, 2017 · 2 minute read
Yesterday FreeBSD 11.1 was released. Once I got into work I started upgrading the VM I use for day-to-day activities. After creating a Boot Environment (BE) using beadm(1), and running the upgrade and install parts of freebsd-update(8), I rebooted into a newly activated BE only to find I had an 11.1 kernel, but an 11.0 userland…
I had no idea what I’d done wrong. After some questions on the FreeBSD Forums, I figured it out. Previously, I had only run the install process once; the install process needs to be done three times:
- Install the kernel
- Install userland
- Run install again for cleanup
Usually, one would reboot between these actions. This is what I had attempted, but when I rebooted into my new BE I got the familiar message:
No updates are available to install.
Run '/usr/sbin/freebsd-update fetch' first.
So, what’s the correct procedure? Hunting across the Internet, I found many examples of how people thought it should be done. The most common was:
- Create a BE
- Activate the BE
- Reboot into the BE
- Fetch the upgrade
- Install the upgraded kernel
- Install upgraded userland
- Reboot (sometimes this seemed optional)
- Run install again for cleanup
Now, that seems like an awful lot of downtime. Back at Sun, BEs were introduced to me as a convenient roll-back method if things go wrong, and also as a way to reduce downtime caused by upgrades (this was called Live Upgrade). Having all of this downtime did not appeal to me one bit.
So, how can we reduce downtime while using FreeBSD Boot Environments? Run all three installation tasks one after the other. Since we are upgrading an essentially dormant system (the BE hasn’t been activated and rebooted into yet), we don’t need the in-between reboots. Here’s my process:
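The process itself hasn’t survived in this copy of the post, but based on the steps above it presumably looks something like this (a sketch using beadm(1) and freebsd-update(8); the BE name and exact flags are illustrative, so check the man pages before running anything):

```shell
# Create a new boot environment and mount it; the running system is untouched
beadm create 11.1-upgrade
beadm mount 11.1-upgrade /mnt

# Fetch the upgrade and run the install step three times against the
# mounted (dormant) BE: kernel, userland, then cleanup
freebsd-update -b /mnt -r 11.1-RELEASE upgrade
freebsd-update -b /mnt install   # kernel
freebsd-update -b /mnt install   # userland
freebsd-update -b /mnt install   # cleanup of old files and libraries

# Activate the BE and take the single reboot
beadm umount 11.1-upgrade
beadm activate 11.1-upgrade
shutdown -r now
```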
Now you can keep the previous BE around until you’re happy everything is working and then destroy it.
I’ve not tried it, but I see no reason why this wouldn’t work for updates (e.g. 11.0-RELEASE-p0 to 11.0-RELEASE-p1) too.
Feb 8, 2016 · 3 minute read
Last week on Twitter I was promoting BSD on the desktop.
I got a small flurry of “likes” and “retweets” regarding a number of posts, and one (I think) real person even posted to #BSDdesktopWeek!
Nobody that I know of took me up on the offer of switching their everyday desktop to BSD for the week, but then I did only start promoting it the Friday before it started…
But why did I want to promote BSD on the desktop? Firstly, pretty much all the arguments we had a few years ago about why Linux was good for the desktop hold true for BSD as well. Secondly, since I started using FreeBSD on a laptop at home I have realised just how well engineered the system is, how logical everything feels, and how great the community is.
Having discovered the second point above, I have begun to switch my Linux servers to FreeBSD. With the power of Jails and ZFS, I now have one virtual machine running three different services (persistent IRC client, GitLab server, and ownCloud server), all segregated from each other, and the whole thing uses less than 20GB of storage. Management is very easy and super configurable.
Since I came to FreeBSD as a server via the desktop, it is somewhat my hope that casual (techie) desktop users who use a BSD every day on the desktop might choose a BSD for any future server requirements. Techie users would also bring with them a wealth of knowledge from other systems to improve the BSD in untold ways, and, just playing the numbers game, more users would encourage software to become more portable and BSD-friendly.
How was my week using BSD on the desktop? I must confess I missed OS X, and it didn’t help that I ran FreeBSD in VirtualBox on my Mac. I missed single click, Magic Mouse support, keyboard shortcuts (e.g. for generating a hyphen instead of a dash, or printing typographic quotation marks), and certain applications that only run on OS X (mainly Tweetbot and Reeder). Although some applications, like 1Password, would run well in Wine, others (like Dropbox) did not—1Password uses Dropbox to sync files, so bummer!
But for general desktopy stuff, it worked really nicely. I happily did email, wrote most of this blog post, watched YouTube, did some perl script editing, some research, etc. It just worked as a functional desktop.
In the middle of the week, I found a really beautiful email client, still in development, called N1. I downloaded the source and attempted to compile, but no luck. After I opened a ticket, the developers made an effort to make sure it works on FreeBSD as well as the other supported systems—which I think is awesome of them!
Next year I think I’ll start promoting earlier, and perhaps try to draw in some support from other BSD users.
Dec 29, 2015 · 1 minute read
I play with my Raspberry Pi so rarely that I forget how to use my CP2102 serial converter to connect from my iMac or FreeBSD laptop to the Raspberry Pi, so I thought I’d write a blog post and then I’d have an easy place to go back to remember how…
Connecting the cables
Raspberry Pi Model B connected to USB—UART Adaptor.
On a Mac
- Acquire a CP2102 serial converter
- Download the driver (direct link to zip file)
- Attach Raspberry Pi using a USB 2.0 or older port (not USB 3)
- Open up Terminal.app and type:
screen -fn /dev/cu.SLAB_USBtoUART 115200
The -fn flag disables flow control
And you’re done!
On FreeBSD
- Acquire a CP2102 serial converter
- Load uslcom.ko—either add it to loader.conf, compile it into the kernel, or as root do:
kldload uslcom
- Attach Raspberry Pi via any USB port
- Open up a terminal and, as root (or via sudo), type:
cu -l /dev/ttyU0 -s 115200
And you’re done!
Aug 4, 2015 · 1 minute read
At work I deploy Red Hat Enterprise Linux VMs, for a variety of reasons, mostly by hand.
One of the steps I loathe is setting up the network; it’s almost the only thing that truly requires manually tapping each character out. I have, however, learnt this bash one-liner so well that I can type it out without thinking:
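The one-liner itself is missing from this copy of the post; something along these lines fits the description (a sketch using ip(8) and awk, not necessarily the original command, with eth0 as the example interface):

```shell
# Print the MAC address of eth0 (sketch; swap eth0 for your interface)
ip link show eth0 | awk '/link\/ether/ {print $2}'
```

Redirect that output into the relevant ifcfg- file and you have the line ready for its HWADDR= prefix.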
Simply replace “eth0” with whatever interface you want the MAC address from and redirect the output into the relevant ifcfg- file, then edit said file with your favourite editor and prepend “HWADDR=” to the line with the MAC address on it.
Apr 21, 2015 · 2 minute read
At home I use an iMac. When I’m away from my desk I use an iPhone 6. At work, I’m forced to deal with Windows (though use Linux/BSD VMs where possible).
I have a lot of software on my Mac; a number of apps are “document based” but manage those documents internally. Some of these apps talk really nicely across many of the platforms I use (e.g. Evernote); other software works incredibly well within the Apple ecosystem (e.g. OmniFocus).
Lots of the software I use is “document based” in the real sense of the term: you click “save” and it spurts out a document that resides on your filesystem, and these can be accessed on multiple platforms with various syncing services (e.g. ownCloud, or Dropbox).
Then there is iWork. The trio of apps can save a file to your desktop, or shove it in iCloud. It works really well on OS X, and on iOS, and a host of other platforms thanks to iCloud.com.
In 2013, Apple released the iWork for iCloud beta. Users with supported browsers (officially Safari 6.0.3+, IE 9.0.8+, Chrome 27.0.1+) on (I presume) any platform can get access to any of their iWork documents which live in iCloud. How cool is that!? So now I can have a look over my budget on my work computer while trying to sort out car insurance in my lunch hour, or give a presentation put together on my Mac on someone else’s Linux workstation.
At WWDC 2014, Apple announced CloudKit. CloudKit gives developers some server side infrastructure so that they can think about programming the application, and not get caught up thinking about server logic. CloudKit provides:
- iCloud Authentication
- Asset Storage
- Database Storage (both general/public, and per user/private)
So, Apple already has apps (albeit in beta) on iCloud.com which make some of their core apps cross-platform. So too have they started developers thinking about the cloud.
Could this give developers a means to deploy their apps onto iCloud.com and get them out to users on platforms other than OS X and iOS? Users, by the way, who likely already have an Apple ID and have given up their payment methods thanks to over a decade of iDevices and iTunes.
Mar 19, 2015 · 1 minute read
Over the past few years I’ve needed to recover files from various USB sticks and SD cards using my Mac. I’ve recently needed to do this again, and every time I’m asked I always forget the application I use! So I’ve created a blog post so it’s easy to search for!
The application I use is TestDisk. It’s a command line application and can recover files from a number of different file systems. It’s fairly easy to use once you’ve read the website, and being CLI based it has the advantage that you can SSH into your friend’s machine and work with them over a phone/Skype/FaceTime call along with a shared screen or tmux session!
Jan 25, 2015 · 3 minute read
I have mixed feelings about Google. On the one hand their search engine is next to ubiquitous, most browsers come with it as the default, and they also have some pretty excellent services. On the other hand, and perhaps this is my Apple synapse firing, I see them as a new enemy, and one that isn’t doing much to try and win me over.
In the past few years a few Google services that I have used (and some that I didn’t) have been axed. The likes of Reader, Latitude, XMPP, CalDAV (actually it appears they have revised this), and ActiveSync for rival platforms have all been killed off or rolled into Google’s social networking site: Google Plus. They’ve forked WebKit into a new project (Blink), to which there are pros and cons. We’re even seeing services and applications being built solely for Google Chrome (OK, “More browsers coming soon”…), which is damaging for an open internet.
My biggest issue here is email: if Google decided to turn off IMAP/SMTP I would be forced to use whatever app Google wanted me to use - be that the GMail app for iPhone, webmail, etc. I’d have to stop using what I wanted in order to keep using my email.
Although Google services are free, we all know that they aren’t really. Google (and others) collect data about you and sell it on - hence they can afford to keep making cool stuff and give it away for ‘free’.
The problem with using a ‘free’ service is that you haven’t invested any money into the service, so the provider owes nothing to you. The other problem is what data is being collected about you, who that data is being sold to, and what that data might tell them (rightly or wrongly!).
Given the above, I have decided to distance myself from Google. Some of the services that Google offers are still of use to me, and there aren’t the same, ubiquitous, services available elsewhere (e.g. translate). I have cut away from Chrome, Gmail, and even Google Search.
Today I surf the web using Safari at home and Firefox at work, I use DuckDuckGo to search the web, and for email I’ve subscribed to Zoho, which allows me to use my own domain name. My mapping needs are adequately met using Apple’s Maps. Apart from Google Translate, I’m not sure there is much I use Google for anymore. It’s been a fairly long road, and I could push it further, perhaps by taking what I’ve learnt about sacrifice and further distancing myself from companies that do or may gather personal data (anonymised or otherwise).
There are likely some usages that are either out of my control, or where I’d have to more deeply customise my computer setups, where various services pull/push data to/from Google behind the scenes, but for the time being I’m not going to let that worry me.
Bootnote: This blog post was drafted back in 2013 when I started to “deGoogle” my life. I’ve quickly zipped through and updated a few bits.
Jan 5, 2015 · 1 minute read
Taken from Small Labs Inc., here’s a histogram of my most used commands:
cd 289 ############################################################
ls 282 ###########################################################
git 158 #################################
hi 76 ################
vim 75 ################
for 72 ###############
ssh 70 ###############
sudo 52 ###########
java 48 ##########
rm 46 ##########
cat 38 ########
brew 38 ########
man 35 ########
less 34 ########
find 26 ######
scp 23 #####
ps 23 #####
“hi 17 ####
lorem 16 ####
top 13 ###
Have a try at making your own with the code from the Small Labs Inc. website!
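I don’t have the exact code to hand, but a pipeline along these lines (a sketch, not necessarily the Small Labs Inc. version) produces a similar histogram when run in an interactive bash session:

```shell
# Sketch of the kind of pipeline behind the histogram: count the first
# word of each history entry, take the top 20, and draw a scaled # bar
history | awk '{count[$2]++} END {for (c in count) print count[c], c}' \
  | sort -rn | head -20 \
  | awk '{printf "%-8s %4d ", $2, $1; for (i = 0; i < $1 / 5; i++) printf "#"; print ""}'
```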
May 30, 2014 · 4 minute read
Technology moves so fast, doesn’t it? I mean who would want such a battered looking laptop?
It doesn’t look like much, does it?
This first-generation MacBook has been in my possession for eight years today. It set me back just a little over £900 at the time, for 1GB of RAM, a dual-core (32-bit) CPU running at 2GHz, and OS X 10.4 Tiger.
I bought the MacBook for two main reasons:
- I wanted a laptop
- I wanted a Mac
The third point - which is what made me justify it at the time - was that I would be taking it to University with me. My only real requirement at the time was that it last, and it has - mostly.
The problems I’ve had
Overheating (~6 months in) - a fault with the original design (or so I was told) was that one of the heat sensor wires passed too close to the CPU. When the CPU warmed up enough, it could melt the wire, which could trigger the logic board to think it was overheating and just shut everything down. With advice from an Apple Retailer and certified engineer, I was able to get a little fix done for free under warranty, which was good as the logic boards weren’t available in the UK yet.
Dead hard drive (~18 months in) - a horribly inevitable situation, though I didn’t think it would strike this soon. Apple eventually noted that there was a problem, but that was after I had fixed it myself, at a cost of £80.
Melting power cable (~24 months in) - I noticed while on a trip to Scotland that the power kept cutting out while charging. After some inspection it turned out that the cable on the MacBook side of the power block had melted through its casing and was shorting! As the linked article describes, I fixed the issue myself, though in 2012 it started to melt again right up near the MacBook, and I ended up throwing it away and using the one I bought on eBay.
Overheating (~5 years in) - Simple problem, the fan died, and this caused the MacBook to overheat. The fan cost a couple of quid off eBay and it was down for only a couple of days.
Dead wifi (~7 years in) - I think due to excessive heat, the wifi chip died :( Now I have to use an external WiFi dongle…
Obsolescence - Since 2011 I have been unable to update the OS past 10.6 (Snow Leopard) due to the 32-bit CPU. For a long time this didn’t hinder me, and it’s only in the last year or two that I’ve come across some 64-bit only apps, or apps that rely on an OS newer than 10.6.
But despite all of this…
It has been an awesome laptop. Costing around £1000 over its lifetime (currently working out at about £125/year), I still use it most days, and not just for hopping around the web - just this past Christmas I was firing up Windows 7 virtual machines with VirtualBox to get train logs analysed (which took some fairly serious number crunching). The apps that I need to run still do, and for features that are missing (e.g. bookmarks in iCloud) I use other services (e.g. XMarks).
I had said to myself, eight years ago, that this laptop would have to last me seven years - the ‘arbitrary’ length of time an old-time Mac user told me a Mac would last - and it has outdone itself. True it’s not been without problems, but it is still here to tell the tales (unlike some other laptops, both cheaper and more expensive). When the MacBook Pro with Retina display was announced, I longed for one and I still do, but due to the fact that these things aren’t cheap, and this little fellow is trooping along, I can’t really justify it now.
Long live Faegilath (forgotten elvish meaning)!