Tuesday, 13 Nov 2018 14:33:20 · 1 minute read · Comments
Quite often I want to run an ad-hoc command against a number of hosts; usually this is a subset of an existing group (often nodes newly added to that group).
Let’s say we had an inventory that looks like this:
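(The inventory itself hasn’t survived here; a hypothetical one in the same shape, with the group name made up:)

```
[macs]
mac-j1
mac-j2
mac-j3
```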
We’ve just added mac-j2 and mac-j3, run some Playbook against them, but realised we wanted to reboot them after the Playbook for some reason.
Until today, I thought I had two choices:
- Put these two hosts into a separate, temporary, group.
- Run the ad-hoc command against each machine individually.
I would usually do the first when I had many hosts, and the second if I had two or three.
Today I discovered a third choice (it certainly works in Ansible 2.7, and may work in earlier versions):
- List the machines within quotation marks, separated by spaces.
So, we can do something like this:
ansible 'mac-j2 mac-j3' -i boxen -m reboot --become
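As an aside, Ansible host patterns also accept commas, and the same pattern works with --limit for ansible-playbook; a sketch, with the playbook name made up:

```
ansible 'mac-j2,mac-j3' -i boxen -m reboot --become
ansible-playbook -i boxen site.yml --limit 'mac-j2,mac-j3'
```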
Wednesday, 21 Mar 2018 19:16:27 · 2 minute read · Comments
Often when I’m away from home I leave my iMac on in case I need to grab anything from it remotely, plus it’s ready to go when I get home. Usually I access it using SSH, via a Raspberry Pi:
Internet --------> Raspberry Pi --------> iMac
This is fine for SSH, but when I want to use other things (for example, VNC) it can be a challenge.
The following is mainly for me, but hopefully helpful to others:
- On the Raspberry Pi, in /etc/ssh/sshd_config, make sure that GatewayPorts is set to yes (i.e. the line GatewayPorts yes); if you needed to change it, go ahead and restart SSHd (possibly service sshd restart). This will allow remotely forwarded ports to bind to all interfaces.
- SSH to the iMac (via the Raspberry Pi), and then from the iMac SSH back to the Raspberry Pi using: ssh -R 5900:localhost:5900 user@raspi. This forwards port 5900 (the VNC port), making it available on the Raspberry Pi on the same port.
- On the system you are currently sitting at, SSH to the Raspberry Pi using: ssh -L 5901:localhost:5900 firstname.lastname@example.org. This forwards port 5900 from the Raspberry Pi to port 5901 on your localhost (5901 is chosen so it doesn’t clash with a local VNC service).
- Point your VNC viewer at
localhost:5901 and enjoy!
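As an alternative, on OpenSSH 7.3 or newer the two hops can be collapsed into a single command with -J (ProxyJump); a sketch, assuming the iMac runs sshd and is reachable from the Pi under the made-up name imac.local:

```
ssh -J user@raspi -L 5901:localhost:5900 user@imac.local
```

This avoids the reverse tunnel (and the GatewayPorts change) entirely, at the cost of needing the newer client.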
Sunday, 05 Nov 2017 22:07:27 · 6 minute read · Comments
I’ve been meaning to use Let’s Encrypt for some time now, I don’t really have a good excuse as to why it’s taken so long, other than I wanted to use DNS to verify I owned the relevant domains, and I hadn’t found an easy enough tool to use.
My lame excuse faltered when Dan Langille ported the acme.sh client to FreeBSD.
It’s taken me a while to figure out exactly how I ought to use it, as I wasn’t 100% sure about what I was doing. But after a few false starts, I’ve placed my first certificates into use!
This post describes the steps I’ve taken to get the certs in place, and is mainly documentation for me later on. That said, I hope it’s general enough for others to find it helpful.
First off, we need to install acme.sh, so as root we can do the following:
# pkg install acme.sh
This does a number of things, but most importantly it creates an acme user with the relevant files to start configuring.
Switching to the acme user, there should be an .acme.sh directory (note the leading full stop to make it hidden), and it is here we create our account.conf:
$ cat .acme.sh/account.conf
The LINODE_API_KEY is generated by going into the Linode Manager, clicking on “my profile”, and selecting “API Keys” from the submenu. Create a new key and make sure you save it, it won’t be shown in full again!
The DEFAULT_DNS_SLEEP is set to 900 seconds (15 minutes) because this is the time between Linode DNS refreshes.
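Putting those together, account.conf ends up looking something like this (the key is a placeholder, and CERT_HOME is my assumption about why the certs land under /var/db/acme/certs):

```
LINODE_API_KEY="<your-linode-api-key>"
DEFAULT_DNS_SLEEP=900
CERT_HOME="/var/db/acme/certs"
```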
3. Issuing a certificate
Now we’ve done the configuration, we can issue the certificate:
$ acme.sh --issue --dns dns_linode -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club
[Wed Nov 1 21:22:00 BST 2017] Creating domain key
[Wed Nov 1 21:22:00 BST 2017] The domain key is here: /var/db/acme/certs/bnix.club/bnix.club.key
[Wed Nov 1 21:22:00 BST 2017] Multi domain='DNS:logs.bnix.club,DNS:f.bnix.club,DNS:www.bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting domain auth token for each domain
[Wed Nov 1 21:22:00 BST 2017] Getting webroot for domain='bnix.club'
[Wed Nov 1 21:22:00 BST 2017] Getting new-authz for domain='bnix.club'
[Wed Nov 1 21:22:02 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:02 BST 2017] Getting webroot for domain='logs.bnix.club'
[Wed Nov 1 21:22:02 BST 2017] Getting new-authz for domain='logs.bnix.club'
[Wed Nov 1 21:22:03 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:03 BST 2017] Getting webroot for domain='f.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:05 BST 2017] Getting webroot for domain='www.bnix.club'
[Wed Nov 1 21:22:05 BST 2017] Getting new-authz for domain='www.bnix.club'
[Wed Nov 1 21:22:06 BST 2017] The new-authz request is ok.
[Wed Nov 1 21:22:06 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:06 BST 2017] Using Linode
[Wed Nov 1 21:22:08 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:08 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:08 BST 2017] Using Linode
[Wed Nov 1 21:22:09 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:09 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:09 BST 2017] Using Linode
[Wed Nov 1 21:22:11 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:11 BST 2017] Found domain api file: /var/db/acme/.acme.sh/dnsapi/dns_linode.sh
[Wed Nov 1 21:22:11 BST 2017] Using Linode
[Wed Nov 1 21:22:13 BST 2017] Domain resource successfully added.
[Wed Nov 1 21:22:13 BST 2017] Sleep 900 seconds for the txt records to take effect
[Wed Nov 1 21:37:57 BST 2017] Verifying:bnix.club
[Wed Nov 1 21:38:01 BST 2017] Success
[Wed Nov 1 21:38:01 BST 2017] Verifying:logs.bnix.club
[Wed Nov 1 21:38:05 BST 2017] Success
[Wed Nov 1 21:38:05 BST 2017] Verifying:f.bnix.club
[Wed Nov 1 21:38:09 BST 2017] Success
[Wed Nov 1 21:38:10 BST 2017] Verifying:www.bnix.club
[Wed Nov 1 21:38:14 BST 2017] Success
[Wed Nov 1 21:38:14 BST 2017] Using Linode
[Wed Nov 1 21:38:16 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:16 BST 2017] Using Linode
[Wed Nov 1 21:38:18 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:18 BST 2017] Using Linode
[Wed Nov 1 21:38:20 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:20 BST 2017] Using Linode
[Wed Nov 1 21:38:22 BST 2017] Domain resource successfully deleted.
[Wed Nov 1 21:38:22 BST 2017] Verify finished, start to sign.
[Wed Nov 1 21:38:24 BST 2017] Cert success.
[Wed Nov 1 21:38:24 BST 2017] Your cert is in /var/db/acme/certs/bnix.club/bnix.club.cer
[Wed Nov 1 21:38:24 BST 2017] Your cert key is in /var/db/acme/certs/bnix.club/bnix.club.key
[Wed Nov 1 21:38:25 BST 2017] The intermediate CA cert is in /var/db/acme/certs/bnix.club/ca.cer
[Wed Nov 1 21:38:25 BST 2017] And the full chain certs is there: /var/db/acme/certs/bnix.club/fullchain.cer
The command tells acme.sh to issue a new certificate using Linode DNS entries for the list of sites (each address is preceded with a -d). We now have a certificate sitting in the certs directory (as instructed in our account file). Now we just need to install the certificates.
4. Installing certificates
I’ve opted to allow the acme user to write to the directory where these certificates will be installed; they will then be readable by the www user that nginx runs as.
$ mkdir -p /usr/local/etc/ssl/bnix
$ acme.sh --install-cert -d bnix.club -d logs.bnix.club -d f.bnix.club -d www.bnix.club --key-file /usr/local/etc/ssl/bnix/privkey.pem --fullchain-file /usr/local/etc/ssl/bnix/fullchain.pem --reloadcmd "sleep 65 && touch /var/db/acme/.restart_nginx"
[Mon Oct 23 21:56:44 BST 2017] Installing key to:/usr/local/etc/ssl/bnix/privkey.pem
[Mon Oct 23 21:56:44 BST 2017] Installing full chain to:/usr/local/etc/ssl/bnix/fullchain.pem
[Mon Oct 23 21:56:44 BST 2017] Run reload cmd: sleep 65 && touch /var/db/acme/.restart_nginx
[Mon Oct 23 21:57:49 BST 2017] Reload success
You’ll notice the odd reload command there. I don’t want to give the acme user direct permission to restart nginx, so instead I wait for a time and create a restart file. I then have the following script in my root user’s directory:
#!/bin/sh
if [ -f /var/db/acme/.restart_nginx ]; then
    service nginx forcereload
    rm -f /var/db/acme/.restart_nginx
fi
And then the following in /etc/crontab:
#minute hour mday month wday who command
* * * * * root /bin/sh /root/scripts/restart_nginx.sh
This means that every minute, root checks to see if that file exists; if it does, root restarts nginx and removes the file.
5. Renewing certificates
In order to renew certificates, the acme user must check once a day (using cron):
#minute hour mday month wday command
43 0 * * * /usr/local/sbin/acme.sh --cron --home "/var/db/acme/.acme.sh"
This will cause cron to run the acme.sh script every day at 00:43.
Please note: choose a time other than 00:43, to spread the load on both Linode’s DNS servers and the Let’s Encrypt servers.
Thursday, 27 Jul 2017 21:43:35 · 2 minute read · Comments
Yesterday FreeBSD 11.1 was released. Once I got into work I started upgrading the VM I use for day to day activities. After creating a Boot Environment (BE) using beadm(1), and running the upgrade and install parts of freebsd-update(8), I rebooted into a newly activated BE only to find I had an 11.1 kernel, but an 11.0 userland…
I had no idea what I’d done wrong. After some questions on the FreeBSD Forums, I figured it out. Previously, I had only run the install process once; the install process needs to be run three times:
- Install the kernel
- Install userland
- Run install once more to clean up old files
Usually, one would reboot between these actions. This is what I had attempted, but when I rebooted into my new BE I got the familiar message:
No updates are available to install.
Run '/usr/sbin/freebsd-update fetch' first.
So, what’s the correct procedure? Hunting across the Internet, I found many examples of how people thought it should be done. The most common was:
- Create a BE
- Activate the BE
- Reboot into the BE
- Fetch the upgrade
- Install the upgraded kernel
- Install upgraded userland
- Reboot (sometimes this seemed optional)
- Run install again for cleanup
Now, that seems like an awful lot of downtime. Back at Sun, BEs were introduced to me as a convenient roll-back method if things go wrong, and also to reduce downtime caused by upgrade (this was called Live Upgrade). Having all of this downtime did not appeal to me one bit.
So, how can we reduce downtime while using FreeBSD Boot Environments? Run all three installation tasks one after the other. Since we are upgrading an essentially dormant system (the BE hasn’t been activated and rebooted into yet), we don’t need the in-between reboots. Here’s my process:
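The command listing for the process hasn’t survived here; a sketch of the steps as described, with a hypothetical BE name (treat the flags as an outline rather than a verified recipe):

```
# Create a new Boot Environment, but don't activate it yet
beadm create 11.1-upgrade
beadm mount 11.1-upgrade /mnt

# Point freebsd-update at the dormant BE with -b and run all three installs back to back
freebsd-update -b /mnt -r 11.1-RELEASE upgrade
freebsd-update -b /mnt install   # kernel
freebsd-update -b /mnt install   # userland
freebsd-update -b /mnt install   # cleanup of old files

# The only downtime: activate the BE and reboot into it
beadm activate 11.1-upgrade
shutdown -r now
```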
Now you can keep the previous BE around until you’re happy everything is working and then destroy it.
I’ve not tried it, but I see no reason why this wouldn’t work for updates (e.g. 11.0-RELEASE-p0 to 11.0-RELEASE-p1) too.
Monday, 08 Feb 2016 18:49:32 · 3 minute read · Comments
Last week on Twitter I was promoting BSD on the desktop.
I got a small flurry of “likes” and “retweets” regarding a number of posts, and one (I think) real person even posted to #BSDdesktopWeek!
Nobody that I know of took me up on the offer of switching their everyday desktop to BSD for the week, but then I did only start promoting it the Friday before it started…
But why did I want to promote BSD on the desktop? Firstly, pretty much all the arguments we made a few years ago about why Linux was good for the desktop hold true for BSD. Secondly, since I started using FreeBSD on a laptop at home I have realised just how well engineered the system is, how logical everything feels, and how great the community is.
Having discovered the second point above, I have begun to switch my Linux servers to FreeBSD. With the power of Jails and ZFS, I now have one virtual machine running three different services (a persistent IRC client, a GitLab server, and an ownCloud server), all segregated from each other, and the whole thing uses less than 20GB of storage. Management is very easy and everything is super configurable.
Since I came to FreeBSD as a server via the desktop, it is somewhat my hope that casual (techie) desktop users who use a BSD every day might choose a BSD for any future server requirements. Techie users would also bring with them a wealth of knowledge from other systems to improve the BSDs in untold ways; and, just playing the numbers game, more users would encourage software to become more portable and BSD-friendly.
How was my week using BSD on the desktop? I must confess I missed OS X, and it didn’t help that I ran FreeBSD in VirtualBox on my Mac. I missed single click, Magic Mouse support, keyboard shortcuts (e.g. for generating a hyphen instead of a dash, or printing typographic quotation marks), and certain applications that only run on OS X (mainly Tweetbot and Reeder). Although some applications, like 1Password, would run well in Wine, others (like Dropbox) did not; 1Password uses Dropbox to sync files, so bummer!
But for general desktopy stuff, it worked really nicely. I happily did email, wrote most of this blog post, watched YouTube, did some perl script editing, some research, etc. It just worked as a functional desktop.
In the middle of the week, I found a really beautiful email client that is still in development called N1. I downloaded the source and attempted to compile, but no luck. Having opened a ticket, the developers are making an attempt to make sure this works on FreeBSD as well as the other supported systems—which I think is awesome of them!
Next year I think I’ll start promoting earlier, and perhaps try to draw in some support from other BSD users.
Tuesday, 29 Dec 2015 15:12:17 · 1 minute read · Comments
I play with my Raspberry Pi so rarely that I forget how to use my CP2102 serial converter to connect from my iMac or FreeBSD laptop to the Raspberry Pi, so I thought I’d write a blog post and then I’d have an easy place to go back to remember how…
Connecting the cables
Raspberry Pi Model B connected to a USB–UART adaptor.
On a Mac
- Acquire a CP2102 serial converter
- Download the driver (direct link to zip file)
- Attach Raspberry Pi using a USB 2.0 or older port (not USB 3)
- Open up Terminal.app and type:
screen -fn /dev/cu.SLAB_USBtoUART 115200
The -fn flag disables flow control
And you’re done!
On FreeBSD
- Acquire a CP2102 serial converter
- Load uslcom.ko: either add it to loader.conf, compile it into the kernel, or load it as root with kldload uslcom
- Attach Raspberry Pi via any USB port
- Open up a terminal and as root (or via sudo), type:
cu -l /dev/ttyU0 -s 115200
And you’re done!
Tuesday, 04 Aug 2015 11:07:03 · 1 minute read · Comments
At work I deploy Red Hat Enterprise Linux VMs, for a variety of reasons, mostly by hand.
One of the steps I loathe is setting up the network; it’s almost the only thing that truly requires manually tapping each character out. I have, however, learnt this bash one-liner so well that I type it out without thinking:
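(The one-liner itself hasn’t survived here; a reconstruction in the same spirit, assuming the iproute2 ip tool and the interface name eth0:)

```shell
# Print the MAC address of eth0 in uppercase, ready to become an HWADDR= line
ip link show eth0 2>/dev/null | awk '/link\/ether/ {print toupper($2)}'
```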
Simply replace “eth0” with whatever interface you want the MAC address from and redirect it into the relevant ifcfg- file, edit said file with your favourite editor and prepend “HWADDR=” to the line with the MAC address on.
Tuesday, 21 Apr 2015 17:41:57 · 2 minute read · Comments
At home I use an iMac. When I’m away from my desk I use an iPhone 6. At work, I’m forced to deal with Windows (though use Linux/BSD VMs where possible).
I have a lot of software on my Mac; a number of apps are “document based” but manage those documents internally. Some of these apps talk really nicely across many of the platforms I use (e.g. Evernote); other software works incredibly well within the Apple ecosystem (e.g. OmniFocus).
Lots of the software I use is “document based” in the real sense of the term: you click “save” and it spurts out a document that resides on your file system, and these documents can be accessed on multiple platforms via various syncing services (e.g. ownCloud or Dropbox).
Then there is iWork. The trio of apps can save a file to your desktop, or shove it in iCloud. It works really well on OS X, and on iOS, and a host of other platforms thanks to iCloud.com.
In 2013, Apple released the iWork for iCloud beta. Users with supported browsers (officially Safari 6.0.3+, IE 9.0.8+, Chrome 27.0.1+) on (I presume) any platform can get access to any of their iWork documents which live in iCloud. How cool is that!? So now I can have a look over my budget on my work computer while trying to sort out car insurance in my lunch hour, or give a presentation put together on my Mac on someone else’s Linux workstation.
At WWDC 2014, Apple announced CloudKit. CloudKit gives developers some server side infrastructure so that they can think about programming the application, and not get caught up thinking about server logic. CloudKit provides:
- iCloud Authentication
- Asset Storage
- Database Storage (both general/public, and per user/private)
So, Apple already has apps (albeit in beta) on iCloud.com which make some of their core apps cross-platform. So too have they started developers thinking about the cloud.
Could this give developers a means to deploy their apps onto iCloud.com and get them out to users on platforms other than OS X and iOS? Users, by the way, that likely already have an Apple ID and have given up their payment methods thanks to over a decade of iDevices and iTunes.
Sunday, 25 Jan 2015 22:38:06 · 3 minute read · Comments
I have mixed feelings about Google. On the one hand their search engine is next to ubiquitous, most browsers come with it as the default, and they also have some pretty excellent services. On the other hand, and perhaps this is my Apple synapse firing, I see them as a new enemy, and one that isn’t doing much to try and win me over.
In the past few years a few Google services that I have used (and some that I didn’t) have been axed. The likes of Reader, Latitude, XMPP, CalDAV (actually it appears they have revised this), and ActiveSync for rival platforms have all been killed off or rolled into Google’s social networking site: Google Plus. They’ve forked WebKit into a new project (Blink), to which there are pros and cons. We’re even seeing services and applications being built solely for Google Chrome (OK, “More browsers coming soon”…), which is damaging for an open internet.
My biggest issue here is email: if Google decided to turn off IMAP/SMTP I would be forced to use whatever app Google wanted me to use - be that the GMail app for iPhone, webmail, etc. I’d have to stop using what I wanted in order to keep using my email.
Although Google services are free, we all know that they aren’t really. Google (and others) collect data about you and sell it on - hence they can afford to keep making cool stuff and give it away for ‘free’.
The problem with using a ‘free’ service is that you haven’t invested any money into it, so the provider owes nothing to you. The other problem is what data is being collected about you, who that data is being sold to, and what that data might tell them (rightly or wrongly!)
Given the above, I have decided to distance myself from Google. Some of the services that Google offers are still of use to me, where the same ubiquitous services aren’t available elsewhere (e.g. Translate), but I have cut away from Chrome, Gmail, and even Google Search.
Today I surf the web using Safari at home and Firefox at work, I use DuckDuckGo to search the web, and for email I’ve subscribed to Zoho, which allows me to use my own domain name. My mapping needs are adequately met by Apple’s Maps. Apart from Google Translate, I’m not sure there is much I use Google for anymore. It’s been a fairly long road, and I can push it further, perhaps by taking what I’ve learnt about sacrifice and further distancing myself from companies that do or may gather personal data (anonymised or otherwise).
There are likely some usages that are either out of my control, or where I’d have to more deeply customise my computer setups, where various services pull/push data to/from Google behind the scenes, but for the time being I’m not going to let that worry me.
Bootnote: This blog post was drafted back in 2013 when I started to “deGoogle” my life. I’ve quickly zipped through and updated a few bits.