Jul 27, 2017 · 2 minute read · Comments
Yesterday FreeBSD 11.1 was released. Once I got into work I started upgrading the VM I use for day-to-day activities. After creating a Boot Environment (BE) using beadm(1), and running the upgrade and install parts of freebsd-update(8), I rebooted into the newly activated BE only to find I had an 11.1 kernel but an 11.0 userland…
I had no idea what I’d done wrong. After some questions on the FreeBSD Forums, I figured it out. Previously, I had only run the install process once; the install process needs to be run three times:
- Install the kernel
- Install userland
- Remove old shared libraries (the cleanup pass)
Usually, one would reboot between these actions. This is what I had attempted, but when I rebooted into my new BE I got the familiar message:
No updates are available to install.
Run '/usr/sbin/freebsd-update fetch' first.
So, what’s the correct procedure? Hunting across the Internet, I found many examples of how people thought it should be done. The most common was:
- Create a BE
- Activate the BE
- Reboot into the BE
- Fetch the upgrade
- Install the upgraded kernel
- Install upgraded userland
- Reboot (sometimes this seemed optional)
- Run install again for cleanup
Now, that seems like an awful lot of downtime. Back at Sun, BEs were introduced to me as a convenient roll-back method if things go wrong, and also to reduce downtime caused by upgrade (this was called Live Upgrade). Having all of this downtime did not appeal to me one bit.
So, how can we reduce downtime while using FreeBSD Boot Environments? Run all three installation tasks one after the other. Since we are upgrading an essentially dormant system (the BE hasn’t been activated and rebooted into yet), we don’t need the in-between reboots. Here’s my process:
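The command listing for the process didn’t survive here, but in outline it looks something like this (a sketch only: the BE name “11.1” and mount point /mnt are arbitrary choices, and the flags are worth double-checking against beadm(1) and freebsd-update(8) before use):

```shell
# Sketch: upgrade a mounted, inactive BE, then reboot once at the end.
beadm create 11.1                # snapshot the running system as a new BE
beadm mount 11.1 /mnt            # mount it; the live root is untouched
freebsd-update -b /mnt \
    --currently-running 11.0-RELEASE \
    upgrade -r 11.1-RELEASE      # fetch the upgrade into the mounted BE
freebsd-update -b /mnt install   # first run: kernel
freebsd-update -b /mnt install   # second run: userland
freebsd-update -b /mnt install   # third run: remove old shared libraries
beadm umount 11.1
beadm activate 11.1              # boot the new BE by default
shutdown -r now                  # the only reboot needed
```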
Now you can keep the previous BE around until you’re happy everything is working and then destroy it.
I’ve not tried it, but I see no reason why this wouldn’t work for updates (e.g. 11.0-RELEASE-p0 to 11.0-RELEASE-p1) too.
Feb 8, 2016 · 3 minute read · Comments
Last week on Twitter I was promoting BSD on the desktop.
I got a small flurry of “likes” and “retweets” regarding a number of posts, and one (I think) real person even posted to #BSDdesktopWeek!
Nobody that I know of took me up on the offer of switching their everyday desktop to BSD for the week, but then I did only start promoting it the Friday before it started…
But why did I want to promote BSD on the desktop? Firstly, pretty much all the arguments we made a few years ago about why Linux was good for the desktop hold true for BSD. Secondly, since I started using FreeBSD on a laptop at home I have realised just how well engineered the system is, how logical everything feels, and how great the community is.
Having discovered the second point above, I have begun switching my Linux servers to FreeBSD. With the power of Jails and ZFS, I now have one virtual machine running three different services (persistent IRC client, GitLab server, and ownCloud server), all segregated from each other, and the whole thing uses less than 20GB of storage. Management is very easy and super configurable.
Since I came to FreeBSD as a server via the desktop, it is somewhat my hope that casual (techie) desktop users who use a BSD every day on the desktop might choose a BSD for any future server requirements. Techie users would also bring with them a wealth of knowledge from other systems to improve the BSDs in untold ways, and, just playing the numbers game, more users would encourage software to become more portable and BSD friendly.
How was my week using BSD on the desktop? I must confess I missed OS X, and it didn’t help that I ran FreeBSD in VirtualBox on my Mac. I missed single click, Magic Mouse support, keyboard shortcuts (e.g. for generating a hyphen instead of a dash, or printing typographic quotation marks), and certain applications that only run on OS X (mainly Tweetbot and Reeder). Although some applications, like 1Password, would run well in Wine, others (like Dropbox) did not—and 1Password uses Dropbox to sync files, so bummer!
But for general desktopy stuff, it worked really nicely. I happily did email, wrote most of this blog post, watched YouTube, did some perl script editing, some research, etc. It just worked as a functional desktop.
In the middle of the week, I found a really beautiful email client that is still in development called N1. I downloaded the source and attempted to compile, but no luck. After I opened a ticket, the developers began making an effort to ensure it works on FreeBSD as well as the other supported systems—which I think is awesome of them!
Next year I think I’ll start promoting earlier, and perhaps try to draw in some support from other BSD users.
Dec 29, 2015 · 1 minute read · Comments
I play with my Raspberry Pi so rarely that I forget how to use my CP2102 serial converter to connect from my iMac or FreeBSD laptop to the Raspberry Pi, so I thought I’d write a blog post and then I’d have an easy place to go back to remember how…
Connecting the cables
Raspberry Pi Model B connected to USB—UART Adaptor.
On a Mac
- Acquire a CP2102 serial converter
- Download the driver (direct link to zip file)
- Attach Raspberry Pi using a USB 2.0 or older port (not USB 3)
- Open up Terminal.app and type:
screen -fn /dev/cu.SLAB_USBtoUART 115200
The -fn flag disables flow control.
And you’re done!
On FreeBSD
- Acquire a CP2102 serial converter
- Load uslcom.ko—either add it to loader.conf, compile it into the kernel, or as root do:
kldload uslcom
- Attach Raspberry Pi via any USB port
- Open up a terminal and as root (or via sudo), type:
cu -l /dev/ttyU0 -s 115200
And you’re done!
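For reference, the loader.conf route mentioned above is a single line in /boot/loader.conf, so the driver loads at every boot:

```
uslcom_load="YES"
```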
Aug 4, 2015 · 1 minute read · Comments
At work I deploy Red Hat Enterprise Linux VMs, for a variety of reasons, mostly by hand.
One of the steps I loathe is setting up the network; it’s almost the only thing that truly requires manually tapping each character out. I have, however, learnt this bash one-liner so well that I type it out without thinking:
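The one-liner itself was lost in formatting here, but a sketch along these lines (using ip(8) and awk, with eth0 as the example interface) does the same job of printing the MAC address in the uppercase form commonly seen in ifcfg files:

```shell
# Print eth0's MAC address in uppercase, ready to paste after
# "HWADDR=" in /etc/sysconfig/network-scripts/ifcfg-eth0
ip link show eth0 | awk '/link\/ether/ {print toupper($2)}'
```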
Simply replace “eth0” with whatever interface you want the MAC address from and redirect the output into the relevant ifcfg- file, then edit said file with your favourite editor and prepend “HWADDR=” to the line with the MAC address on.
Apr 21, 2015 · 2 minute read · Comments
At home I use an iMac. When I’m away from my desk I use an iPhone 6. At work, I’m forced to deal with Windows (though use Linux/BSD VMs where possible).
I have a lot of software on my Mac; a number of apps are “document based” though manage those documents internally. Some of these apps talk really nicely across many of the platforms I use (e.g. Evernote); other software works incredibly well within the Apple ecosystem (e.g. OmniFocus).
Lots of the software I use is “document based” in the real sense of the term: you click “save” and it spits out a document that resides on your filesystem, and these documents can be accessed on multiple platforms via various syncing services (e.g. ownCloud, or Dropbox).
Then there is iWork. The trio of apps can save a file to your desktop, or shove it in iCloud. It works really well on OS X, and on iOS, and a host of other platforms thanks to iCloud.com.
In 2013, Apple released the iWork for iCloud beta. Users with supported browsers (officially Safari 6.0.3+, IE 9.0.8+, Chrome 27.0.1+) on (I presume) any platform can access any of their iWork documents that live in iCloud. How cool is that!? So now I can look over my budget on my work computer while trying to sort out car insurance in my lunch hour, or give a presentation put together on my Mac on someone else’s Linux workstation.
At WWDC 2014, Apple announced CloudKit. CloudKit gives developers some server side infrastructure so that they can think about programming the application, and not get caught up thinking about server logic. CloudKit provides:
- iCloud Authentication
- Asset Storage
- Database Storage (both general/public, and per user/private)
So, Apple already has apps (albeit in beta) on iCloud.com which make some of their core apps cross-platform. So too have they started developers thinking about the cloud.
Could this give developers a means to deploy their apps onto iCloud.com and get them out to users on platforms other than OS X and iOS? Users, by the way, who likely already have an Apple ID and have given up their payment details thanks to over a decade of iDevices and iTunes.
Mar 19, 2015 · 1 minute read · Comments
Over the past few years I’ve needed to recover files from various USB sticks and SD cards using my Mac. I’ve recently needed to do this again, and every time I’m asked I always forget the application I use! So I’ve created a blog post so it’s easy to search for!
The application I use is TestDisk. It’s a command line application and can recover files from a number of different file systems. It’s fairly easy to use once you’ve read the website, and being CLI based it has the advantage that you can SSH into your friend’s machine and work with them over a phone/Skype/FaceTime call along with a shared screen or tmux session!
Jan 25, 2015 · 3 minute read · Comments
I have mixed feelings about Google. On the one hand their search engine is next to ubiquitous, most browsers come with it as the default, and they also have some pretty excellent services. On the other hand, and perhaps this is my Apple synapse firing, I see them as a new enemy, and one that isn’t doing much to try and win me over.
In the past few years a few Google services that I have used (and some that I didn’t) have been axed. The likes of Reader, Latitude, XMPP, CalDAV (actually it appears they have revised this), and ActiveSync for rival platforms have all been killed off or rolled into Google’s social networking site: Google Plus. They’ve forked WebKit into a new project (Blink), to which there are pros and cons. We’re even seeing services and applications being built solely for Google Chrome (OK, “More browsers coming soon”…), which is damaging for an open internet.
My biggest issue here is email: if Google decided to turn off IMAP/SMTP I would be forced to use whatever app Google wanted me to use - be that the GMail app for iPhone, webmail, etc. I’d have to stop using what I wanted in order to keep using my email.
Although Google services are free, we all know that they aren’t really. Google (and others) collect data about you and sell it on - hence they can afford to keep making cool stuff and give it away for ‘free’.
The problem with using a ‘free’ service is that you haven’t invested any money in it, so the provider owes nothing to you. The other problem is what data is being collected about you, who that data is being sold to, and what that data might tell them (rightly or wrongly!).
Given the above, I have decided to distance myself from Google. Some of the services that Google offers are still of use to me, and there aren’t the same ubiquitous services available elsewhere (e.g. translate). I have cut away from Chrome, Gmail, and even Google Search.
Today I surf the web using Safari at home and Firefox at work, I use DuckDuckGo to search the web, and for email I’ve subscribed to Zoho, which allows me to use my own domain name. My mapping needs are adequately met using Apple’s Maps. Apart from Google Translate, I’m not sure there is much I use Google for anymore. It’s been a fairly long road, and there is further I can push it, perhaps by taking what I’ve learnt about what to sacrifice and further distancing myself from companies that do/may gather personal data (anonymised or otherwise).
There are likely some usages that are either out of my control, or where I’d have to more deeply customise my computer setups, where various services pull/push data to/from Google behind the scenes, but for the time being I’m not going to let that worry me.
Bootnote: This blog post was drafted back in 2013 when I started to “deGoogle” my life. I’ve quickly zipped through and updated a few bits.
Jan 5, 2015 · 1 minute read · Comments
Taken from Small Labs Inc., here’s a histogram of my most used commands:
cd 289 ############################################################
ls 282 ###########################################################
git 158 #################################
hi 76 ################
vim 75 ################
for 72 ###############
ssh 70 ###############
sudo 52 ###########
java 48 ##########
rm 46 ##########
cat 38 ########
brew 38 ########
man 35 ########
less 34 ########
find 26 ######
scp 23 #####
ps 23 #####
“hi 17 ####
lorem 16 ####
top 13 ###
Have a try at making your own with the code from the Small Labs Inc. website!
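I haven’t reproduced the Small Labs script here, but a rough approximation of its counting stage is the classic pipeline below (assuming bash’s history output, where the command sits in the second column; no pretty hash bars, just counts):

```shell
# Tally the most-used commands from shell history, most frequent first
history | awk '{print $2}' | sort | uniq -c | sort -rn | head -20
```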
May 30, 2014 · 4 minute read · Comments
Technology moves so fast, doesn’t it? I mean who would want such a battered looking laptop?
It doesn’t look like much, does it?
This first generation MacBook has been in my possession for eight years today. It set me back just a little over £900 at the time, for 1GB of RAM, a dual-core (32-bit) CPU running at 2GHz, and OS X 10.4 Tiger.
I bought the MacBook for two main reasons:
- I wanted a laptop
- I wanted a Mac
The third point - which is what made me justify it at the time - was that I would be taking it to University with me. My only real requirement at the time was that it last, and it has - mostly.
The problems I’ve had
Overheating (~6 months in) - a fault with the original design (or so I was told) was that one of the heat sensor wires passed too close to the CPU. When the CPU warmed up enough, it could melt the wire, and this could trigger the logic board to think the machine was overheating and just shut everything down. With advice from an Apple Retailer and certified engineer, I was able to get a little fix done for free under warranty, which was good as the logic boards weren’t available in the UK yet.
Dead hard drive (~18 months in) - a horribly inevitable situation, though I didn’t think it would strike this soon. Apple eventually noted that there was a problem, but only after I had fixed it myself, at a cost of £80.
Melting power cable (~24 months in) - I noticed on a trip to Scotland that the power kept cutting out while charging. After some inspection, it turned out that the cable on the MacBook side of the power block had melted through its casing and was shorting! As the linked article describes, I fixed the issue myself, though in 2012 it started to melt right up near the MacBook, and I ended up throwing it away and using the one I bought on eBay.
Overheating (~5 years in) - Simple problem, the fan died, and this caused the MacBook to overheat. The fan cost a couple of quid off eBay and it was down for only a couple of days.
Dead wifi (~7 years in) - I think due to excessive heat, the wifi chip died :( Now I have to use an external WiFi dongle…
Obsolescence - Since 2011 I have been unable to update the OS past 10.6 (Snow Leopard) due to the 32-bit CPU. For a long time this didn’t hinder me, and it’s only in the last year or two that I’ve come across some 64-bit only apps, or apps that rely on an OS newer than 10.6.
But despite all of this…
It has been an awesome laptop. Costing around £1000 over its lifetime (currently working out at about £125/year), I still use it most days, and not just for hopping around the web - just this past Christmas I was firing up Windows 7 virtual machines with VirtualBox to get train logs analysed (which took some fairly serious number crunching). The apps that I need to run do, and for features that are missing (e.g. bookmarks in iCloud) I use other services (e.g. XMarks).
I had said to myself, eight years ago, that this laptop would have to last me seven years - the ‘arbitrary’ length of time an old-time Mac user told me a Mac would last - and it has outdone itself. True it’s not been without problems, but it is still here to tell the tales (unlike some other laptops, both cheaper and more expensive). When the MacBook Pro with Retina display was announced, I longed for one and I still do, but due to the fact that these things aren’t cheap, and this little fellow is trooping along, I can’t really justify it now.
Long live Faegilath (forgotten elvish meaning)!
May 11, 2014 · 1 minute read · Comments
Well it’s been a long time since I’ve been here. After a quick update it all looks very shiny again. What happened to 2013!? Mostly full of work, friends, and reading in the ruins of Aberystwyth castle.
Since the last post in 2012, I have moved from Aberystwyth to Frome in Somerset. I no longer work with ERTMS, or indeed on the railway at all; for the last month and a half I have been an employee at Ntegra, where I have been working with one of their clients to configure many virtual machines running Red Hat Enterprise Linux.
All of the ERTMS stuff seemed quite secret, and so bespoke, that few people would find the ins and outs interesting. Now that I’m back working with computers, I hope to post here more - mainly with things I’ve learnt, so I have an easy place to come back to, but hopefully others will find such information useful.