Dewey Defeats Truman!


I’ve had the opportunity to do some democracy lately, for the French 2024 European Parliament elections, the French 2024 legislative elections and the British 2024 General Election. I didn’t actually participate in the British election, despite the legal right, for a few reasons:

  • Needing to re-register to vote every few years, which I apparently failed to do in time
  • Needing to apply for a postal vote every time
  • Relying on wishes and prayers that the postal vote arrives in time to make use of it, which it has failed to do in the past
  • Having enough time to actually return the postal vote for it to be counted. Do you know what it costs to ship a ballot for next-day international delivery? Lots. I had to do that when voting in 2019
  • Ultimately, it doesn’t make a difference – I won’t be properly represented by the MP for Banbury when I live 3500 miles away

So… I didn’t vote in the British elections. I’ll come back to the results, but the elections I did vote in are the French ones.

Read more…

Saving my Wrists, Part 1: moving to an ergonomic mouse


I’m fully aware of my bad desk-work ergonomics. I’m pretty good at sitting position – back straight, lumbar pillow, lap sloping forward, using my standing desk whenever the cat steals my chair, etc etc etc – but I use a normal gamer keyboard and mouse in a cramped position.

Late last year I started getting a bit of a twinge in my wrist, which reminded me of the time I properly flirted with carpal tunnel at my first job, doing an intensive coding project on an ultra-small laptop. And with the upcoming 2024 refresh of the company’s wellness expense budget, I decided to start looking into more ergonomic pointing device options.

Read more…

Building a NAS


The status quo


Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo

That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality.

Read more…

Retirement


Apparently it’s nearly four years since I last posted to my blog. Which is, to a degree, the point here. My time, and priorities, have changed over the years. And this led me to the decision that my available time and priorities in 2023 aren’t compatible with being a Debian or Ubuntu developer, and realistically, haven’t been for years. As of earlier this month, I quit as a Debian Developer and Ubuntu MOTU.

I think a lot of my blogging energy got absorbed by social media over the last decade, but with the collapse of Twitter and Reddit due to mismanagement, I’m trying to allocate more time for blog-based things instead. I may write up some of the things I’ve achieved at work (.NET 8 is now snapped for release Soon). I might even blog about work-adjacent controversial topics, like my changed feelings about the entire concept of distribution packages. But there’s time for that later. Maybe.

I’ll keep tagging vaguely FOSS-related topics with the Debian and Ubuntu tags, which causes them to be aggregated in the Planet Debian/Ubuntu feeds (RSS, remember that from the before times?!) until an admin on those sites gets annoyed at the off-topic posting of an emeritus dev and deletes them.

But that’s where we are. Rather than ignore my distro obligations, I’ve admitted that I just don’t have the energy any more. Let someone less perpetually exhausted than me take over. And if they don’t, maybe that’s OK too.

My name is Jo and this is home now


After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.

Read more…

Too many cores


Arming yourself


ARM is important for us. It’s important for IoT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per core than the AMD chips – a good 25-50%. Which raised the question: “why are our Raspbian builds so much slower?”

Blowing a raspberry


Raspbian is the de facto main OS for the Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.
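
For a sense of what that rebuild means in practice, the difference boils down to the compiler baseline. The flags below are illustrative of the two targets, not copied from either distribution’s actual build configuration:

```
# Debian armhf baseline (ARMv7):
#   gcc -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard ...
# Raspbian armhf baseline (ARMv6, so it runs on the original Pi):
#   gcc -march=armv6   -mfpu=vfp       -mfloat-abi=hard ...
```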

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
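
If you want to try it, a minimal sketch looks something like this (the keyring path is where the raspbian-archive-keyring package normally installs it; the package name at the end is just a placeholder):

```
# Create a Raspbian stretch build chroot on an arm64 host, using the
# Raspbian mirror and keyring instead of Debian's.
sudo pbuilder create \
    --distribution stretch \
    --mirror http://archive.raspbian.org/raspbian \
    --debootstrapopts --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg

# Then build a source package in that chroot as usual:
sudo pbuilder build some-package_1.0-1.dsc
```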

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset


Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
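
For reference, the knobs look like this (semantics per the arm64 kernel’s legacy-instruction emulation documentation; exact defaults vary by kernel version):

```
# Inspect the current settings:
sysctl abi.cp15_barrier abi.setend abi.swp

# Enable emulation (0 = undefined, raising SIGILL; 1 = emulate in the
# kernel; 2 = hardware execution, where the hardware still supports it):
sudo sysctl abi.cp15_barrier=1 abi.setend=1 abi.swp=1

# The kernel logs rate-limited emulation warnings, which is where the
# figures below come from (message format varies by kernel):
dmesg | grep -iE 'cp15|setend|swp'
```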

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones because I had 64 cores, not 8 – and that I could improve performance by running a VM with only one core in it (so CP15 calls inside that environment would affect only the VM, not the rest of the machine).

Escape from the Pie Folk


I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.
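
The setup amounts to something like the following sketch (I actually drove it through virt-manager’s GUI, and the names and sizes here are illustrative):

```
# Create a single-vCPU arm64 VM with UEFI firmware (from the
# qemu-efi-aarch64 package), installing Ubuntu 18.04 over serial.
virt-install \
    --name onecore-builder \
    --arch aarch64 --machine virt \
    --vcpus 1 --memory 4096 \
    --boot uefi \
    --disk size=32 \
    --cdrom ubuntu-18.04-server-arm64.iso \
    --graphics none \
    --console pty,target_type=serial
```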

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this step simply wasn’t needed. The OS inside the VM didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps


The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”.

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

Bootstrapping RHEL 8 support on mono-project.com


Preamble


On mono-project.com, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GNUTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise Linux 8 landed.

Cramming a square RPM into a round Ubuntu


Our packaging infrastructure relies upon a homogenous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.
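
In other words, once a config exists under /etc/mock/, a build should be a one-liner or two (config and package names below are illustrative):

```
# Initialise the RHEL 8 beta build root, then rebuild a source RPM in it.
mock -r rhel-8-beta-x86_64 --init
mock -r rhel-8-beta-x86_64 --rebuild mono-x.y.z-1.src.rpm
```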

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e., in our case, Ubuntu’s yum package.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.
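
For anyone wanting to consume the end-user PPA, usage is the standard PPA dance. The PPA name below is a placeholder, not the real one linked above:

```
# On Ubuntu 16.04 (PPA name is a placeholder):
sudo add-apt-repository ppa:SOME-OWNER/mock-with-dnf
sudo apt-get update
sudo apt-get install mock
```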

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS


We make various assumptions about package availability which hold for CentOS, but not for RHEL 8. The first hurdle was the (lack of) availability of the EPEL repository for RHEL 8 – in the end I just grabbed the relevant packages from EPEL 7, shoved them onto a web server, and got away with it. The second is structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages live in a separate (paid, subscription-required) repository called “CodeReady Linux Builder”, and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages.
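
The mechanics of the workaround were nothing fancy – roughly this, with the URL and package name as placeholders rather than the actual ones used:

```
# Fetch the borrowed packages and serve them as a flat repo over HTTP.
mkdir -p /var/www/html/rhel8-extras
cd /var/www/html/rhel8-extras
curl -LO https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/s/some-package-1.0-1.el7.noarch.rpm
createrepo .
```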

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora


After re-bootstrapping all the packages from the CentOS 7 repo into our “““CentOS 8””” repo (we make lots of naming assumptions in our control flow, so the world would break if we didn’t call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.
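
A quick way to see the Python mismatch is to compare the RPM Python macros on each system (the values shown are from memory, so treat them as approximate):

```
# Ask RPM what Python version and paths get baked in at build time:
rpm -E '%{python3_version} %{python3_sitelib}'
# RHEL 8:    3.6 /usr/lib/python3.6/site-packages
# Fedora 29: 3.7 /usr/lib/python3.7/site-packages
```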

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29.

MonoDevelop 7.7 on Fedora 29.

The only erratum: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel package. Other than that, hooray!

On the topic of being part of a large and diverse community, including people whose identities you might not be able to personally understand


EOL notification – Debian 7, Ubuntu 12.04


Mono packages will no longer be built for these ancient distribution releases, starting from when we add Ubuntu 18.04 to the build matrix (likely early to mid April 2018).

Unless someone with a fat wallet screams, and throws a bunch of money at Azure, anyway.

Update on MonoDevelop Linux releases


Once upon a time, mono-project.com had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not necessarily being forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

Where are the ARM builds?

https://github.com/mono/monodevelop/pull/3923

Where are the ARM64 builds?

https://github.com/ericsink/SQLitePCL.raw/issues/199

Why aren’t you offering builds for $DISTRIBUTION?

It’s already an inordinate amount of work to support the 10(!) distributions I currently target. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 builds (currently) running concurrently, the likelihood of all of them succeeding – i.e. a publishable, known-good release – is about 0.6¹² ≈ 0.2%. I’m on build attempt 34 since my last packaging fix, at time of writing.

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product.

The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?

Yes 😬