Planet NoName e.V.

25. May 2017

sECuREs Webseite

Auto-opening portal pages with NetworkManager

Modern desktop environments like GNOME offer UI for this, but if you’re using a more bare-bones window manager, you’re on your own. This article outlines how to get a login page opened in your browser when you’re behind a portal.

If your distribution does not automatically enable it (Fedora does, Debian doesn’t), you’ll first need to enable connectivity checking in NetworkManager:

# cat >> /etc/NetworkManager/NetworkManager.conf <<'EOT'
[connectivity]
uri=http://network-test.debian.org/nm
EOT

Then, add a dispatcher hook which will open a browser when NetworkManager detects you’re behind a portal. Note that the username must be hard-coded because the hook runs as root, so this hook will not work as-is on multi-user systems. The URL I’m using is an always-http URL, also used by Android (I expect it to be somewhat stable). Portals will redirect you to their login page when you open this URL.

# cat > /etc/NetworkManager/dispatcher.d/99portal <<'EOT'
#!/bin/bash

[ "$CONNECTIVITY_STATE" = "PORTAL" ] || exit 0

USER=michael
USERHOME=$(eval echo "~$USER")
export XAUTHORITY="$USERHOME/.Xauthority"
export DISPLAY=":0"
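# Open the connectivity check URL in the user's X session; a captive
# portal will redirect this request to its login page.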
su $USER -c "x-www-browser http://www.gstatic.com/generate_204"
EOT
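
NetworkManager only executes dispatcher scripts that are marked executable; since cat creates the file with mode 644 under the default umask, you will most likely also need to run:

# chmod +x /etc/NetworkManager/dispatcher.d/99portal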

by Michael Stapelberg at 25. May 2017 09:37:17

26. April 2017

mfhs Blog

look4bas: Small script to search for Gaussian basis sets

In the past couple of days I hacked together a small Python script which searches the EMSL Basis Set Exchange library of Gaussian basis sets from the command line.

Unlike the web interface, it allows using grep-like regular expressions to search the names and descriptions of the basis sets. Of course, the selection can also be limited to those basis sets which have definitions for a specific subset of elements. All matching basis sets can be downloaded at once; right now, however, only downloading the basis set files in Gaussian94 format is implemented.

The code, further information and some examples can be found at https://github.com/mfherbst/look4bas.

by mfh at 26. April 2017 00:00:21

24. April 2017

RaumZeitLabor

Excursion to the Luxor Filmpalast Bensheim

In only a few places are technology and pop culture as close together as in a cinema, and that is especially true of the Luxor Filmpalast Bensheim. Besides an auditorium done up entirely in Star Wars design, there is also a shark tank and all kinds of pop-culture exhibits to see, for example a DeLorean converted in Back-to-the-Future style.

Reason enough for us to take a closer look.

We will meet on Saturday, 20 May, at 16:00 in the entrance area of the Luxor Filmpalast Bensheim and will then have Luxor 7 all to ourselves. This is the Star Wars-themed auditorium mentioned above, which also comes with its own lounge area.

At 16:30 we will then do what one normally does in a cinema: we will watch a film. Exclusively for us, there will be a special screening of “Hackers”, the 1995 masterpiece. HACK THE PLANET!

After the film, a technically versed member of staff will take us on a tour behind the scenes of the cinema, and, as a special extra, we will have the opportunity to visit the unofficial “museum” that is also located in the cinema building. It is an extensive private collection of action figures and other merchandise, put together over the course of 35 years.

The event is expected to end at around 19:30.

Taking part in the excursion costs 25€ per person. As usual, membership in the RaumZeitLabor e.V. is not a prerequisite for taking part in the event.

The number of participants is limited, so a binding registration is required. To register, send me an email with the subject “Kino Exkursion”.

As always, participants in the excursion should coordinate among themselves and form carpools where possible. Further information about the excursion will be sent by email to the registered participants in due course.

by blabber at 24. April 2017 00:00:00

16. April 2017

sECuREs Webseite

HomeMatic re-implementation

A while ago, I got myself a bunch of HomeMatic home automation gear (valve drives, temperature and humidity sensors, power switches). The gear itself works reasonably well, but I found the management software painfully lacking. Hence, I re-implemented my own management software. In this article, I’ll describe my method, in the hope that others will pick up a few nifty tricks to make future re-implementation projects easier.

Motivation

When buying my HomeMatic devices, I decided to use the wide-spread HomeMatic Central Control Unit (CCU2). This embedded device runs the proprietary rfd wireless daemon, which offers an XML-RPC interface, used by the web interface.

I find the CCU2’s web interface really unpleasant. It doesn’t look modern, it takes ages to load, and it doesn’t indicate progress. I frequently find myself clicking on a button, only to realize that my previous click has still not been processed entirely, so my current click ends up on a different element than the one I intended to click. Ugh.

More importantly, even if you avoid the CCU2’s web interface altogether and only want to extract sensor values, you’ll come to realize that the device crashes every few weeks. Due to memory pressure, the rfd is killed and doesn’t come back. As a band-aid, I wrote a watchdog cronjob which would just reboot the device. I also reported the bug to the vendor, but never got a reply.

When I tried to update the software to a more recent version, things went so wrong that I decided to downgrade and not touch the device anymore. This is not a good state to be in, so eventually I started my project to replace the device entirely. The replacement is hmgo, a central control unit implemented in Go, deployed to a Raspberry Pi running gokrazy. The radio module I’m using is HomeMatic’s HM-MOD-RPI-PCB, which is connected to a serial port, much like in the CCU2 itself.

Preparation: gather and visualize traces

In order to compare the behavior of the CCU2 stock firmware against my software, I wanted to capture some traces. Looking at what goes on over the air (or on the wire) is also a good learning opportunity to understand the protocol.

  1. I wrote a Wireshark dissector (see contrib/homematic.lua). It is a quick & dirty hack, does not properly dissect everything, but it works for the majority of packets. This step alone will make the upcoming work so much easier, because you won’t need to decode packets in your head (and make mistakes!) so often.
  2. I captured traffic from the working system. Conveniently, the CCU2 allows SSH'ing in as root after setting a password. Once logged in, I used lsof and ls /proc/$(pidof rfd)/fd to identify the file descriptors which rfd uses to talk to the serial port. Then, I used strace -e read=7,write=7 -f -p $(pidof rfd) to get hex dumps of each read/write. These hex dumps can directly be fed into text2pcap and can be analyzed with Wireshark.
  3. I also wrote a little Perl script to extract and convert packet hex dumps from homegear debug logs to text2pcap-compatible format. More on that in a bit.

Preparation: research

Then, I gathered as much material as possible. I found and ended up using the following resources (in order of frequency):

  1. homegear source
  2. FHEM source
  3. homegear presentation
  4. hmcfgusb source
  5. FHEM wiki

Preparation: lab setup

Next, I got the hardware to work with a known-good software. I set up homegear on a Raspberry Pi, which took a few hours of compilation time because there were no pre-built Debian stretch arm64 binaries. This step established that the hardware itself was working fine.

Also, I got myself another set of traces from homegear, which is always useful.

Implementation

Now the actual implementation can begin. Note that up until this point, I hadn’t written a single line of actual program code. I defined a few milestones which I wanted to reach:

  1. Talk to the serial port.
  2. Successfully initialize the HM-MOD-RPI-PCB
  3. Receive any BidCoS broadcast packet
  4. Decode any BidCoS broadcast packet (can largely be done in a unit test; see the sketch below this list)
  5. Talk to an already-paired device (re-using the address/key from my homegear setup)
  6. Configure an already-paired device
  7. Pair a device
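
Milestone 4 lends itself to plain unit tests: feed raw frames (e.g. pasted from a Wireshark capture) into the decoder and compare the resulting fields against expectations. The following is only a rough sketch, not code from hmgo; the packet type, its field names and the assumed frame layout (length, message counter, flags, message type, 3-byte source and destination addresses, payload) are illustrative assumptions:

package bidcos

import (
    "bytes"
    "testing"
)

// packet is a hypothetical decoded BidCoS frame; the real types in hmgo
// almost certainly look different.
type packet struct {
    Counter byte   // message counter
    Flags   byte
    Type    byte
    Source  uint32 // 24-bit sender address
    Dest    uint32 // 24-bit destination address
    Payload []byte
}

// decode assumes the frame layout: length, counter, flags, type,
// 3-byte source, 3-byte destination, payload.
func decode(b []byte) packet {
    return packet{
        Counter: b[1],
        Flags:   b[2],
        Type:    b[3],
        Source:  uint32(b[4])<<16 | uint32(b[5])<<8 | uint32(b[6]),
        Dest:    uint32(b[7])<<16 | uint32(b[8])<<8 | uint32(b[9]),
        Payload: b[10:],
    }
}

func TestDecodeBroadcast(t *testing.T) {
    // Made-up raw bytes, as they might be pasted from a capture.
    raw := []byte{0x0b, 0x2a, 0x84, 0x10, 0x11, 0x22, 0x33, 0x00, 0x00, 0x00, 0x06, 0x01}
    got := decode(raw)
    if got.Type != 0x10 || got.Source != 0x112233 || got.Dest != 0 {
        t.Fatalf("unexpected header fields: %+v", got)
    }
    if !bytes.Equal(got.Payload, []byte{0x06, 0x01}) {
        t.Fatalf("unexpected payload: %x", got.Payload)
    }
}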

To make the implementation process more convenient, I changed the compilation command of my editor to cross-compile the program, scp it to the Raspberry Pi and run it there. This allowed me to test my code with one keyboard shortcut, and I love quick feedback.

Retrospective

The entire project took a few weeks of my spare time. If I had taken some time off of work, I’m confident I could have implemented it in about a week of full-time work.

Consciously doing research, preparation and milestone planning was helpful. It gave me a good sense of my progress and of which goals were achievable.

As I’ve learnt previously, investing in tools pays off quickly, even for one-off projects like this one. I’d recommend everyone who’s doing protocol-related work to invest some time in learning to use Wireshark and writing custom Wireshark dissectors.

by Michael Stapelberg at 16. April 2017 10:20:00

25. March 2017

sECuREs Webseite

Review: Turris Omnia (with Fiber7)

The Turris Omnia is an open-source (an OpenWrt fork), open-hardware internet router created and supported by nic.cz, the registry for the Czech Republic. It’s the successor to their Project Turris, but with better specs.

I was made aware of the Turris Omnia while it was being crowd-funded on Indiegogo and decided to support the cause. I’ve been using OpenWrt on my wireless infrastructure for many years now, and finding a decent router with enough RAM and storage for the occasional experiment used to not be an easy task. As a result, I had been using a very stable but also very old tp-link WDR4300 for over 4 years.

For the last 2 years, I had been using a Ubiquiti EdgeRouter Lite (Erlite-3) with a tp-link MC220L media converter and the aforementioned tp-link WDR4300 access point. Back then, that was one of the few setups which delivered 1 Gigabit in passively cooled (quiet!) devices running open source software.

With its hardware specs, the Turris Omnia promised to be a big upgrade over my old setup: the project pages described the router as capable of processing 1 Gigabit, equipped with an 802.11ac WiFi card, and having an SFP slot for the fiber transceiver I use to get online. Without sacrificing performance, the Turris Omnia would replace 3 devices (media converter, router, WiFi access point), which yields nice space and power savings.

Performance

Wired performance

As expected, the Turris Omnia delivers a full Gigabit. A typical speedtest.net result is 2ms ping, 935 Mbps down, 936 Mbps up. Speeds displayed by wget and other tools max out at the same values as with the Ubiquiti EdgeRouter Lite. Latency to well-connected targets such as Google remains at 0.7ms.

WiFi performance

I did a few quick tests on speedtest.net with the devices I had available, and here are the results:

Client                          Down (WDR4300)   Down (Omnia)   Up
ThinkPad X1 Carbon 2015         35 Mbps          470 Mbps       143 Mbps
MacBook Pro 13" Retina 2014     127 Mbps         540 Mbps       270 Mbps
iPhone SE                       226 Mbps         227 Mbps

Compatibility (software/setup)

OpenWrt’s default setup at the time I set up this router was the most pleasant surprise of all: using the Turris Omnia with Fiber7 is literally plug & play. After opening the router’s wizard page in your web browser, you just need to click “Next” a few times and you’re online, with IPv4 and IPv6 configured in a way that will be good enough for many people.

I realize this is due to Fiber7 using “just” DHCPv4 and DHCPv6 without requiring credentials, but man is this nice to see. Open source/hardware devices which just work out of the box are not something I’m used to :-).

One thing I ended up changing, though: in the default setup (at the time when I tried it), hostnames sent to the DHCP server would not automatically resolve locally via DNS. I.e., I could not use ping beast without any further setup to send ping probes to my gaming computer. To fix that, for now one needs to disable KnotDNS in favor of dnsmasq’s built-in DNS resolver. This will leave you without KnotDNS’s DNSSEC support. But I prefer ease of use in this particular trade-off.

Compatibility (hardware)

Unfortunately, the SFPs which Fiber7 sells/requires are not immediately compatible with the Turris Omnia. If I understand correctly, the issue is related to speed negotiation.

After months of discussion in the Turris forum and not much success on fixing the issue, Fiber7 now offers to disable speed negotiation on your port if you send them an email. Afterwards, your SFPs will work in media converters such as the tp-link MC220L and the Turris Omnia.

The downside is that debugging issues with your port becomes harder, as Fiber7 will no longer be able to see whether your device correctly negotiates speed, the link will just always be forced to “up”.

Updates

The Turris Omnia’s automated updates are a big differentiator: without you having to do anything, the Turris Omnia installs new software versions automatically. This alone will likely improve your home network’s security, and in my eyes it alone justifies buying the router.

Of course, automated upgrades constitute a certain risk: if the new software version or the upgrade process has a bug, things might break. This happened once to me in the 6 months that I have been using this router. I still haven’t seen a statement from the Turris developers about this particular breakage — I wish they would communicate more.

Since you can easily restore your configuration from a backup, I’m not too worried about this. In case you’re travelling and really need to access your devices at home, I would recommend temporarily disabling the automated upgrades, though.

Product Excellence

One feature I love is that the brightness of the LEDs can be controlled, to the point where you can turn them off entirely. It sounds trivial, but the result is that I don’t have to apply tape to this device to dim its LEDs. To not disturb watching movies, playing games or having guests crash on the living room couch, I can turn the LEDs off and only turn them back on when I actually need to look at them for something — in practice, that’s never, because the router just works.

Recovering the software after horribly messing up an experiment is pretty easy: when you hold the reset button for a certain number of seconds, the device enters a mode in which a new firmware file is flashed from a plugged-in USB memory stick. What’s really nice is that the mode is indicated by the color of the LEDs, saving you the tedious counting that other devices require, which I always tend to start at the wrong second. This is a very good compromise between saving cost and pleasing developers.

The Turris Omnia has a serial port readily accessible via a pin header that’s reachable after opening the device. I definitely expected an easily accessible serial port in a device which targets open source/hardware enthusiasts. In fact, I have two ideas to make that serial port even better:

  1. Label the pins on the board — that doesn’t cost a cent more and spares some annoying googling for a page which isn’t highly ranked in the search results. Sparing some googling is a good move for an internet router: chances are that accessing the internet will be inconvenient while you’re debugging your router.
  2. Expose the serial port via USB-C. The HP 2530-48G switch does this: you don’t have to connect a USB2serial to a pin header yourself, rather you just plug in a USB cable which you’ll probably carry anyway. Super convenient!

Conclusion

tl;dr: if you can afford it, buy it!

I’m very satisfied with the Turris Omnia. I like how it is both open source and open hardware. I rarely want to do development with my main internet router, but when I do, the Turris Omnia makes it pleasant. The performance is as good as advertised, and I have not noticed any stability problems with either the router itself or the WiFi.

I outlined above how the next revision of the router could be made ever so slightly more perfect, and I described the issues I ran into (SFP compatibility and an update breaking my non-standard setup). If these aren’t deal-breakers for you (and it seems unlikely that they would be), you should definitely consider the Turris Omnia!

by Michael Stapelberg at 25. March 2017 09:40:00

25. February 2017

RaumZeitLabor

[Excursion time] RZL meets NTM

NTM

WERTHER RaumZeitLaborants!

There is KAIN escape from the excursions! The ABSCHIEDSWALZER of the last excursion has barely faded away, and already the next round is starting.

Thanks to the MACHT DES SCHICKSALS, the next excursion destination on the plan is the Nationaltheater Mannheim.

On Saturday, 18 March 2017, from 15:00 – DONNERWETTER - GANZ FAMOS – we have the opportunity to take a look behind the scenes. Anyone who has always wanted to learn a bit more about the building and LIGHTing technology as well as the audio and video systems of the Nationaltheater should mark the date in their calendar with a KREIDEKREIS.

If you want to take part, or would like to bring CARMEN or OTHELLO along, SO WIRD’S GEMACHT: send me an email by 10 March 2017, 13:37, with the names of everyone coming along (please also mention if you are a pupil or a BETTELSTUDENT!) [🎭].

So that not TAUSEND SÜSSE BEINCHEN end up trampling through the theatre, the number of participants is limited to 15.

Admission is free for you - so that DER GEIZIGE RITTER can come along too!

Greetings,
DIE FLEDERMAUS


[🎭] No email, no NTM!

by flederrattie at 25. February 2017 00:00:00

19. February 2017

Insantity Industries

The beauty of Archlinux

Recently I have been asked (again) why one should pick Arch Linux when choosing a Linux distribution. Since my fellow inhabitants of the IRC channel in question are always very pleased by the wall of text awaiting them when they return to the keyboard after such a question, I decided to write down my thoughts on the matter so that in the future I can simply link to this post.

I use Arch Linux as my main distribution and will lay out why I personally like it. Afterwards I will also outline why you might not want to use Arch Linux. This is merely a collection of arguments; it is certainly not exhaustive, nor do I claim to be unbiased in this matter, as I personally like the distro a lot. Ultimately you will have to decide on your own which distribution is best suited for you.

This post is about the specifics of Arch Linux in the family of Linux distributions. It is not about comparing Linux to Windows or OSX or other operating system families. But enough of the chitchat, let’s get to the matter at hand:

Why use Arch Linux

Simplicity

Arch is developed under the KISS principle (“Keep It Simple, Stupid”). This does not mean that Arch is necessarily easy, but the system and its inherent structure are simple in many respects:

Few abstraction layers

Arch generally has very few abstraction layers. Instead of having a tool autoconfigure things for you, configuration in Arch is usually done manually. This does not mean that something like NetworkManager is not available in the package repos, but the basic system configuration is usually done without additional tooling, by editing the respective configuration files. This means that one has a good feeling for what’s going on in one’s own system and how it is structured, because one has made those configurations oneself.

This also means there is no automagical merging or changing of configuration files when an update requires it. The way this is handled in Arch is that pacman, the distribution’s package manager, notices when a config file was changed from the package’s default and then installs the new version shipped in the package with the extension .pacnew, telling you about it so that you can merge those files on your own after finishing all the upgrades.

Easy, yet powerful system to build packages

Speaking of packages, it is very easy to build packages in Arch. You simply write the shell commands you would use to compile/install/whatever the software into a special function in a so-called PKGBUILD file, which is effectively a build script, call makepkg, and you end up with a package that installs your software under the package manager’s control. The package itself is nothing more than a mirror of the part of the file system tree relevant for that package, plus a little metadata for the package manager.

It is also very simple to rebuild packages from the official repositories, e.g. if one wants features compiled in that the default Arch configuration has disabled (think of a kernel with custom options). The Arch Build System, ABS for short, provides you with all the sources you need for that purpose. Edit one or two simple files and you are ready to go.

No unnecessary package splitting

Software in Arch is usually packaged the way it is organized by the upstream developers. This means that if upstream provides one piece of software, it will end up as one package in Arch and is not split into multiple packages, for example to separate out plugins or data files. This reduces the number of packages present and improves the overview.

Getting rid of the old

If a technology used in Arch turns out to have better alternatives and the developers decide to transition to the new technology, then this transition is made in one clean cut. This means no legacy tech lying around in the system; the new technology is fully integrated, without compromises forced by old technology that must still be supported.

One example of this is the SysVinit-to-systemd transition: instead of using systemd’s capability to remain compatible with SysVinit, the transition was made en bloc. This makes the integration of systemd very smooth and free of compromises to backwards compatibility, which simplifies usage a lot, as one gets rid of a lot of corner cases that would otherwise have needed to be considered.

Getting rid of the old in general

Generally, getting rid of old tech is one of the major things I like about Arch. If something new is better than the old solution, or will eventually have to be adopted anyway, it gets adopted consistently.

Another example is the restructuring of the file system hierarchy, namely the unification of the /usr/bin, /usr/sbin, /usr/lib, /bin, /sbin and /lib directories into /usr/bin and /usr/lib. Several distributions have announced that they will eventually perform that unification, but instead of doing it step by step (which other distributions even spread out over several major releases) and dealing with the issue over an extended period of time, the Arch devs decided to unify all of it en bloc, being done with it once and for all.

Also, instead of trying to be convenient on updates and providing compatibility layers when something changes, the Arch package manager will tell you that manual intervention is required; you make the necessary changes and your system is migrated to the new, compatibility-layer-free configuration. Such interventions are rather rare, however, and are prominently announced on the arch-announce mailing list, in the news feed and on the website itself, so just subscribe somewhere and you will be notified when the next update requires manual intervention and what to do. This also means, though, that in Arch you should not do automated updates, but manual ones, and read the upgrade log, just in case something comes up.

Documentation

Arch has excellent documentation. The Arch wiki, especially but not limited to the English one, is usually an excellent, very detailed source of information, to the point that I tend to look up problems I have in other distributions in the Arch wiki, because chances are good that the problem is covered in the Arch documentation. For me, the Arch wiki has sometimes even surpassed project-internal documentation in its usefulness.

Learning

Arch is a quite hand-curated distro compared to other distributions out there. One is exposed to the internals of the system, as a lot of things in Arch are simply done by hand instead of by a tool (see above). And while I would not claim that Arch is necessarily easy (although it is simple), in combination with the excellent documentation it can be a very fruitful learning experience. It is learning the hard way, though: if that is your goal, Arch is in my opinion very well suited for it, but you have to be willing to climb a steep mountain (depending on your previous experience). Nevertheless, I can fully recommend Arch for that purpose if you are willing to do so. Especially as, once you are at the top, you get to enjoy all the other benefits mentioned.

Software and Updates

Rolling Release

Arch is not a fixed-point-release distro like many others, but a rolling-release distro. This means that instead of having a new release with new features every couple of weeks, months or years, Arch ships new software more or less as soon as it gets released. So instead of certain points in time where a lot of new stuff comes in at once (possibly a lot, possibly requiring migration to new software versions), new stuff comes in piece by piece, spread out over time. This is primarily a matter of personal preference, but I find it more convenient to have regular feature updates (in 99.5% of cases without the need to intervene at all) instead of having to apply and test large updates with every new point release.

Bleeding Edge

Arch’s rolling-release model not only ensures that software comes in spread out over time, it also allows software to be shipped as soon as it is released, and Arch makes use of that. This means that Arch is one of the distros that usually carry the most recent versions of software, getting new stuff as soon as there is a new upstream release.

Vanilla Software

Another thing I really like about software in Arch is that it is shipped as upstream intended: the software compiled is usually precisely the code that upstream released, with no distribution-specific patches applied. Exceptions to this rule are extremely rare, for example a stability fix in the kernel that was already upstream but not yet released. That patch lived for a short while in the linux package in Arch, but was removed again once upstream shipped the fix.

AUR

The official package repositories contain most of the software one needs for daily business. However, if a piece of software is not in those repositories, one can easily package it oneself, as mentioned above, and a lot of people might have already done exactly that. The Arch User Repository (AUR) is a collection of the PKGBUILDs used for exactly this, along the lines of “hey, I needed that software and packaged it for myself, anyone else have a need for the PKGBUILD file I used?” So if one needs a piece of software that is not in the official repositories, chances are fairly good that someone else has already packaged it, and one can use that build script as a base for one’s own package. This even applies to a lot of niche packages you might never have expected to have been packaged at all.

Things to consider

I have pointed out why I like Arch Linux very much and why I definitely can recommend it.

However, be aware of a couple of things:

  • If you just want a distribution that is fire-and-forget and just works out of the box without manual configuration, Arch is not what you are looking for.
  • If you want a rock-solid distribution that is guaranteed never to break in any minor detail, then you are better served with something like Debian (and its stable release), as the fact that Arch ships bleeding-edge software carries a certain residual risk of running into bugs that no one has run into before and that slipped through upstream’s and Arch’s testing procedures.

If none of those are an issue for you (or negligible compared to the advantages), Arch might be a distro very well suited for you.

by Jonas Große Sundrup at 19. February 2017 00:00:00

07. February 2017

RaumZeitLabor

Hackertour Holzheizkraftwerk Heidelberg

To all friends of alliteration and power plants!

On Wednesday, 22 February 2017, starting at 16:00, we have the opportunity to tour the Holz-Heizkraftwerk Heidelberg, a wood-fired combined heat and power plant.

“Strom ohne Atom” (“electricity without nuclear power”) - the Stadtwerke Heidelberg have set themselves the goal of a complete exit from nuclear power by this year. One measure towards that goal was the construction of the wood-fired plant in Pfaffengrund. 90% of its energy is generated from green cuttings and so-called landscape-conservation material.

If you want to know what the remaining 10% consists of, how many megawatt hours of electricity and heat the plant delivers, and what all of this has to do with Heidelberg’s Bahnstadt district, you should not miss this tour.

As always, you are of course welcome to come along even if you are not (yet) a member of the RaumZeitLabor e.V. However, please let me know by email[🔥] by 15.02.2017, as the tour is limited to 15 places for safety reasons.



[🔥] No email, no wood, no heat!

by flederrattie at 07. February 2017 00:00:00

28. January 2017

sECuREs Webseite

Atomically writing files in Go

Writing files is simple, but correctly writing files atomically in a performant way might not be as trivial as one might think. Here’s an extensively commented function to atomically write compressed files (taken from debiman, the software behind manpages.debian.org):

package main

import (
    "bufio"
    "compress/gzip"
    "io"
    "io/ioutil"
    "log"
    "os"
    "path/filepath"
)

func tempDir(dest string) string {
    tempdir := os.Getenv("TMPDIR")
    if tempdir == "" {
        // Convenient for development: decreases the chance that we
        // cannot move files due to TMPDIR being on a different file
        // system than dest.
        tempdir = filepath.Dir(dest)
    }
    return tempdir
}

func writeAtomically(dest string, compress bool, write func(w io.Writer) error) (err error) {
    f, err := ioutil.TempFile(tempDir(dest), "atomic-")
    if err != nil {
        return err
    }
    defer func() {
        // Clean up (best effort) in case we are returning with an error:
        if err != nil {
            // Prevent file descriptor leaks.
            f.Close()
            // Remove the tempfile to avoid filling up the file system.
            os.Remove(f.Name())
        }
    }()

    // Use a buffered writer to minimize write(2) syscalls.
    bufw := bufio.NewWriter(f)

    w := io.Writer(bufw)
    var gzipw *gzip.Writer
    if compress {
        // NOTE: gzip’s decompression phase takes the same time,
        // regardless of compression level. Hence, we invest the
        // maximum CPU time once to achieve the best compression.
        gzipw, err = gzip.NewWriterLevel(bufw, gzip.BestCompression)
        if err != nil {
            return err
        }
        defer gzipw.Close()
        w = gzipw
    }

    if err := write(w); err != nil {
        return err
    }

    if compress {
        if err := gzipw.Close(); err != nil {
            return err
        }
    }

    if err := bufw.Flush(); err != nil {
        return err
    }

    // Chmod the file world-readable (ioutil.TempFile creates files with
    // mode 0600) before renaming.
    if err := f.Chmod(0644); err != nil {
        return err
    }

    if err := f.Close(); err != nil {
        return err
    }

    return os.Rename(f.Name(), dest)
}

func main() {
    if err := writeAtomically("demo.txt.gz", true, func(w io.Writer) error {
        _, err := w.Write([]byte("demo"))
        return err
    }); err != nil {
        log.Fatal(err)
    }
}

rsync(1) will fail when it lacks permission to read files. Hence, if you are synchronizing a repository of files while updating it, you’ll need to set TMPDIR to point to a directory on the same file system (for rename(2) to work) which is not covered by your rsync(1) invocation.

When calling writeAtomically repeatedly to create lots of small files, you’ll notice that creating gzip.Writers is actually rather expensive. Modifying the function to re-use the same gzip.Writer yielded a significant decrease in wall-clock time.
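
As a minimal, self-contained sketch of that optimization (not the actual debiman code, and with the atomic tempfile/rename dance left out for brevity), one gzip.Writer can be allocated up front and re-pointed at each new destination with Reset:

package main

import (
    "bufio"
    "compress/gzip"
    "io"
    "log"
    "os"
)

// One gzip.Writer for the whole process: creating it (especially at
// gzip.BestCompression) is comparatively expensive, resetting it per file is cheap.
var gzipw *gzip.Writer

func init() {
    var err error
    gzipw, err = gzip.NewWriterLevel(nil, gzip.BestCompression)
    if err != nil {
        log.Fatal(err)
    }
}

func writeCompressed(dest string, write func(w io.Writer) error) error {
    f, err := os.Create(dest)
    if err != nil {
        return err
    }
    defer f.Close()

    bufw := bufio.NewWriter(f)
    // Re-target the existing writer at the new destination instead of
    // allocating a fresh gzip.Writer for every file.
    gzipw.Reset(bufw)

    if err := write(gzipw); err != nil {
        return err
    }
    if err := gzipw.Close(); err != nil {
        return err
    }
    return bufw.Flush()
}

func main() {
    for i := 0; i < 3; i++ {
        if err := writeCompressed("demo.txt.gz", func(w io.Writer) error {
            _, err := w.Write([]byte("demo"))
            return err
        }); err != nil {
            log.Fatal(err)
        }
    }
}

If several goroutines write files concurrently, a sync.Pool of gzip.Writers achieves the same effect without sharing a single writer.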

Of course, if you’re looking for maximum write performance (as opposed to minimum resulting file size), you should use a different gzip level than gzip.BestCompression.

by Michael Stapelberg at 28. January 2017 21:29:00

Webfont loading with FOUT

For manpages.debian.org, I looked at loading webfonts. I considered the following scenarios:

#   local?   cached?   Network   Expected         Observed
1   Yes      /         /         perfect render   perfect render
2   No       Yes       /         perfect render   perfect render
3   No       No        Fast      FOUT             FOIT
4   No       No        Slow      FOUT             some FOUT, some FOIT

Scenario #1 and #2 are easy: the font is available, so if we inline the CSS into the HTML page, the browser should be able to render the page perfectly on the first try. Unfortunately, the more common scenarios are #3 and #4, since many people reach manpages.debian.org through a link to an individual manpage.

The default browser behavior, if we just specify a webfont using @font-face in our stylesheet, is the Flash Of Invisible Text (FOIT), i.e. the page loads, but text remains hidden until fonts are loaded. On a good 3G connection, this means users will have to wait 500ms to see the page content, which is far too long for my taste. The user experience becomes especially jarring when the font doesn’t actually load — users will just see a spinner and leave the site frustrated.

In comparison, when using the Flash Of Unstyled Text (FOUT), loading time is 250ms, i.e. cut in half! Sure, you have a page reflow after the fonts have actually loaded, but at least users will immediately see the content.

In an ideal world

In an ideal world, I could just specify font-display: swap in my @font-face definition, but the css-font-display spec is unofficial and not available in any browser yet.

Toolbox

To achieve FOUT when necessary and perfect rendering when possible, we make use of the following tools:

  • CSS font loading API
    The font loading API allows us to request a font load before the DOM is even created, i.e. before the browser would normally start processing font loads. Since we can specify a callback to be run when the font is loaded, we can apply the style as soon as possible: if the font was cached or is installed locally, that is before the DOM is first created, resulting in a perfect render.
    This API is available in Firefox, Chrome, Safari and Opera, but notably not in IE or Edge.
  • Single round-trip asynchronous font loading
    For the remaining browsers, we’ll need to load the fonts and only apply them after they have been loaded. The best way to do this is to create a stylesheet which contains the inlined font files as base64 data and the corresponding styles to enable them. Once the browser has loaded that file, it applies the font, which at that point is guaranteed to be present.
    In order to load that stylesheet without blocking the page load, we’ll use preloading. Native <link rel="preload"> support is available only in Chrome and Opera, but there are polyfills for the remaining browsers.
    Note that a downside of this technique is that we don’t distinguish between WOFF2 and WOFF fonts; we always serve WOFF. This maximizes compatibility, but means that WOFF2-capable browsers have to download more bytes than they would if we offered WOFF2.

Combination

The following flow chart illustrates how to react to different situations:

Putting it all together

Example fonts stylesheet: (base64 data removed for readability)

@font-face {
  font-family: 'Inconsolata';
  src: local('Inconsolata'),
       url("data:application/x-font-woff;charset=utf-8;base64,[…]") format("woff");
}

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: local('Roboto'),
       local('Roboto Regular'),
       local('Roboto-Regular'),
       url("data:application/x-font-woff;charset=utf-8;base64,[…]") format("woff");
}

body {
  font-family: 'Roboto', sans-serif;
}

pre, code {
  font-family: 'Inconsolata', monospace;
}

Example document:

<head>
<style type="text/css">
/* Defined, but not referenced */

@font-face {
  font-family: 'Inconsolata';
  src: local('Inconsolata'),
       url(/Inconsolata.woff2) format('woff2'),
       url(/Inconsolata.woff) format('woff');
}   

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: local('Roboto'),
       local('Roboto Regular'),
       local('Roboto-Regular'),
       url(/Roboto-Regular.woff2) format('woff2'),
       url(/Roboto-Regular.woff) format('woff');
}   
</style>
<script type="text/javascript">
if (!!document['fonts']) {
        /* font loading API supported */
        var r = "body { font-family: 'Roboto', sans-serif; }";
        var i = "pre, code { font-family: 'Inconsolata', monospace; }";
        var l = function(m) {
                if (!document.body) {
                        /* cached, before DOM is built */
                        document.write("<style>"+m+"</style>");
                } else {
                        /* uncached, after DOM is built */
                        document.body.innerHTML+="<style>"+m+"</style>";
                }
        };
        new FontFace('Roboto',
                     "local('Roboto'), " +
                     "local('Roboto Regular'), " +
                     "local('Roboto-Regular'), " +
                     "url(/Roboto-Regular.woff2) format('woff2'), " +
                     "url(/Roboto-Regular.woff) format('woff')")
                .load().then(function() { l(r); });
        new FontFace('Inconsolata',
                     "local('Inconsolata'), " +
                     "url(/Inconsolata.woff2) format('woff2'), " +
                     "url(/Inconsolata.woff) format('woff')")
                .load().then(function() { l(i); });
} else {
        var l = document.createElement('link');
        l.rel = 'preload';
        l.href = '/fonts-woff.css';
        l.as = 'style';
        l.onload = function() { this.rel = 'stylesheet'; };
        document.head.appendChild(l);
}
</script>
<noscript>
  <style type="text/css">
    body { font-family: 'Roboto', sans-serif; }
    pre, code { font-family: 'Inconsolata', monospace; }
  </style>
</noscript>
</head>
<body>

[…content…]

<script type="text/javascript">
/* inlined loadCSS.js and cssrelpreload.js from
   https://github.com/filamentgroup/loadCSS/tree/master/src */
(function(a){"use strict";var b=function(b,c,d){var e=a.document;var f=e.createElement("link");var g;if(c)g=c;else{var h=(e.body||e.getElementsByTagName("head")[0]).childNodes;g=h[h.length-1];}var i=e.styleSheets;f.rel="stylesheet";f.href=b;f.media="only x";function j(a){if(e.body)return a();setTimeout(function(){j(a);});}j(function(){g.parentNode.insertBefore(f,(c?g:g.nextSibling));});var k=function(a){var b=f.href;var c=i.length;while(c--)if(i[c].href===b)return a();setTimeout(function(){k(a);});};function l(){if(f.addEventListener)f.removeEventListener("load",l);f.media=d||"all";}if(f.addEventListener)f.addEventListener("load",l);f.onloadcssdefined=k;k(l);return f;};if(typeof exports!=="undefined")exports.loadCSS=b;else a.loadCSS=b;}(typeof global!=="undefined"?global:this));
(function(a){if(!a.loadCSS)return;var b=loadCSS.relpreload={};b.support=function(){try{return a.document.createElement("link").relList.supports("preload");}catch(b){return false;}};b.poly=function(){var b=a.document.getElementsByTagName("link");for(var c=0;c<b.length;c++){var d=b[c];if(d.rel==="preload"&&d.getAttribute("as")==="style"){a.loadCSS(d.href,d);d.rel=null;}}};if(!b.support()){b.poly();var c=a.setInterval(b.poly,300);if(a.addEventListener)a.addEventListener("load",function(){a.clearInterval(c);});if(a.attachEvent)a.attachEvent("onload",function(){a.clearInterval(c);});}}(this));
</script>
</body>

by Michael Stapelberg at 28. January 2017 15:57:00

31. December 2016

RaumZeitLabor

2k16 interhackerspaces xmas swap

For a few years now it has been a fine tradition that the best among the hackerspaces give each other presents for Christmas. In the style of a Secret Santa (sometimes more of a junk Secret Santa), parcels are sent around at the end of the year. The whole campaign is organized via the wiki of the hackerspace Frack from the Netherlands.

This year we entered the race with a complete in-house development: as the “Leading Hackerspace in Box-Making-Technology” we shipped locked wooden boxes that only open electronically after several puzzles have been solved, releasing the coveted Mannheim chocolate water tower.

We commissioned befriended external artists with the fitting decoration of the boxes, and we think the result is something to be proud of.

The nerds of the Hacklabor received one of these boxes from us, and you can watch the whole project in action here on YouTube. By the way, we are still waiting for video messages from the other hackerspaces that were blessed with boxes…

xmas 2k16

by s1lvester at 31. December 2016 00:00:00

12. December 2016

RaumZeitLabor

Impressions from the excursion to the RNV

On 9 December we visited Rhein-Neckar-Verkehr GmbH and looked at everything that wasn’t locked, from the tram washing facility to the control centre. Here are a few impressions from the tour. More Hackertours are being planned.

RNV Impressionen

by tabascoeye at 12. December 2016 00:00:00

21. November 2016

sECuREs Webseite

Gigabit NAS (running CoreOS)

tl;dr: I upgraded from a qnap TS-119P to a custom HTPC-like network storage solution. This article outlines my original reasoning for the qnap TS-119P, what I learnt, and precisely which solution I replaced the qnap with.

A little over two years ago, I gave a (German) presentation about my network storage setup (see video or slides). Given that video isn’t a great consumption format when you’re pressed on time, and given that a number of my readers might not speak German, I’ll recap the most important points:

  • I reduced the scope of the setup to storing daily backups and providing media files via CIFS.
    I have come to prefer numerous smaller setups over one gigantic setup which offers everything (think a Turris Omnia acting as a router, mail server, network storage, etc.). Smaller setups can be debugged, upgraded or rebuilt in less time. Time-boxing activities has become very important to me as I work full time: if I can’t imagine finishing the activity within 1 or 2 hours, I usually don’t get started on it at all unless I’m on vacation.
  • Requirements: FOSS, relatively cheap, relatively low power usage, relatively high redundancy level.
    I’m looking not to spend a large amount of money on hardware. Whenever I do spend a lot, I feel pressured to get the most out of my purchase and use the hardware for many years. However, I find it more satisfying to be able to upgrade more frequently — just like the update this article is describing :).
    With regards to redundancy, I’m not content with being able to rebuild the system within a couple of days after a failure occurs. Instead, I want to be able to trivially switch to a replacement system within minutes. This requirement results in the decision to run 2 separate qnap NAS appliances with 1 hard disk each (instead of e.g. a RAID-1 setup).
    The decision to go with qnap as a vendor came from the fact that their devices are pretty well supported in the Debian ecosystem: there is a network installer for it, the custom hardware is supported by the qcontrol tool and one can build a serial console connector.
  • The remainder of the points is largely related to software, and hence not relevant for this article, as I’m keeping the software largely the same (aside from the surrounding operating system).

What did not work well

Even a well-supported embedded device like the qnap TS-119P requires too much effort to be set up:

  1. Setting up the network installer is cumbersome.
  2. I contributed patches to qcontrol for setting the wake on LAN configuration on the qnap TS-119P’s controller board and for systemd support in qcontrol.
  3. I contributed my first ever patch to the Linux kernel for wake on LAN support for the Marvell MV643xx series chips.
  4. I ended up lobbying Debian to enable the CONFIG_MARVELL_PHY kernel option, while personally running a custom kernel.

On the hardware side, to get any insight into what the device is doing, your only input/output option is a serial console. To get easy access to that serial console, you need to solder an adapter cable for the somewhat non-standard header which they use.

All of this contributed to my impression that upgrading the device would be equally involved. Logically speaking, I know that this is unlikely since my patches are upstream in the Linux kernel and in Debian. Nevertheless, I couldn’t help but feel like it would be hard. As a result, I have not upgraded my device ever since I got it working, i.e. more than two years ago.

The take-away is that I now try hard to:

  1. use standard hardware which fits well into my landscape
  2. use a software setup which has as few non-standard modifications as possible and which automatically updates itself.

What I would like to improve

One continuous pain point was how slow the qnap TS-119P was with regards to writing data to disk. The slowness was largely caused by full-disk encryption. The device’s hardware accelerator turned out to be useless with cryptsetup-luks’s comparatively small hard-coded 4K block size, resulting in about 6 to 10 MB/s of throughput.

This resulted in me downloading files onto the SSD in my workstation and then transferring them to the network storage. Doing these downloads in a faster environment circumvents my somewhat irrational fears about the files becoming unavailable while I’m downloading them, and allows me to take pleasure in how fast I’m able to download things :).

The take-away is that any new solution should be as quick as my workstation and network, i.e. it should be able to write files to disk with gigabit speed.

What I can get rid of

While dramaqueen and autowake worked well in principle, they turned out to no longer be very useful in my environment: I switched from a dedicated OpenELEC-based media center box to using emby and a Chromecast. emby is also nice to access remotely, e.g. when watching series at a friend’s place, or watching movies while on vacation or business trips somewhere. Hence, my storage solution needs to be running 24/7 — no more automated shutdowns and wake-up procedures.

What worked well

Reducing the scope drastically in terms of software setup complexity paid off. If it weren’t for that, I probably would not have been able to complete this upgrade within a few mornings/evenings and would likely have pushed this project out for a long time.

The new hardware

I researched the following components back in March, but then put the project on hold due to time constraints and to allow myself some more time to think about it. I finally ordered the components in August, and they still ranked best with regards to cost / performance ratio and fulfilling my requirements.

Price        Type         Article
43.49 CHF    Case         SILVERSTONE Sugo SST-SG05BB-LITE
60.40 CHF    Mainboard    ASROCK AM1H-ITX
38.99 CHF    CPU          AMD Athlon 5350 (supports AES-NI)
20.99 CHF    Cooler       Alpine M1-Passive
32.80 CHF    RAM          KINGSTON ValueRAM, 8.0GB (KVR16N11/8)
28.70 CHF    PSU          Toshiba PA3822U-1ACA, PA3822E-1AC3, 19V 2,37A (To19V_2.37A_5.5x2.5)
0 CHF        System disk  (OCZ-AGILITY3 60G) You’ll need to do your own research. Currently, my system uses 5GB of space, so choose the smallest SSD you can find.
225.37 CHF   Total sum

For the qnap TS-119P, I paid 226 CHF, so my new solution is a tad more expensive. However, I had the OCZ-AGILITY3 lying around from a warranty exchange, so, bottom line, I paid less than what I had paid for the previous solution.

I haven’t measured this myself, but according to the internet, the setups have the following power consumption (without disks):

  • The qnap TS-119P uses ≈7W, i.e. ≈60 CHF/year for electricity.
  • The AM1H-ITX / Athlon 5350 setup uses ≈20W, i.e. ≈77 CHF/year for electricity.

In terms of fitting well into my hardware landscape, this system does a much better job than the qnap. Instead of having to solder a custom serial port adapter, I can simply connect a USB keyboard and an HDMI or DisplayPort monitor and I’m done.

Further, any linux distribution can easily be installed from a bootable USB drive, without the need for any custom tools or ports.

Full-disk encryption performance

# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       338687 iterations per second
PBKDF2-sha256     228348 iterations per second
PBKDF2-sha512     138847 iterations per second
PBKDF2-ripemd160  246375 iterations per second
PBKDF2-whirlpool   84891 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   468.8 MiB/s  1040.9 MiB/s
     aes-cbc   256b   366.4 MiB/s   885.8 MiB/s
     aes-xts   256b   850.8 MiB/s   843.9 MiB/s
     aes-xts   512b   725.0 MiB/s   740.6 MiB/s
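
The benchmark measures the raw kernel crypto throughput. Which cipher a given LUKS volume actually uses can be read from its header; a quick check, assuming the /dev/sdb2 device that the cloud-config in Appendix A unlocks:

# Show the cipher name/mode recorded in the LUKS header:
cryptsetup luksDump /dev/sdb2 | grep -i cipher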

Network performance

As the old qnap TS-119P would only sustain gigabit performance using IPv4 (with TCP checksum offloading), I was naturally relieved to see that the new solution can send packets at gigabit line rate using both protocols, IPv4 and IPv6. I ran the following tests inside a docker container (docker run --net=host -t -i debian:latest):

# nc 10.0.0.76 3000 | dd of=/dev/null bs=5M
0+55391 records in
0+55391 records out
416109464 bytes (416 MB) copied, 3.55637 s, 117 MB/s
# nc 2001:db8::225:8aff:fe5d:53a9 3000 | dd of=/dev/null bs=5M
0+275127 records in
0+275127 records out
629802884 bytes (630 MB) copied, 5.45907 s, 115 MB/s

The CPU was >90% idle using netcat-traditional.
The CPU was >70% idle using netcat-openbsd.
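
The sending side of this test is not shown above; a minimal sketch of such a setup (the exact commands on the sending host are an assumption) looks like this:

# On the sending host (10.0.0.76 / 2001:db8::225:8aff:fe5d:53a9), serve zeroes
# on TCP port 3000 (netcat-traditional syntax):
nc -l -p 3000 < /dev/zero

# On the machine under test (inside the Debian container), receive and discard the stream:
nc 10.0.0.76 3000 | dd of=/dev/null bs=5M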

End-to-end throughput

Reading/writing to a disk which uses cryptsetup-luks full-disk encryption with the aes-cbc-essiv:sha256 cipher, these are the resulting speeds:

Reading a file from a CIFS mount works at gigabit throughput, without any tuning:

311+1 records in
311+1 records out
1632440260 bytes (1,6 GB) copied, 13,9396 s, 117 MB/s

Writing works at almost gigabit throughput:

1160+1 records in
1160+1 records out
6082701588 bytes (6,1 GB) copied, 58,0304 s, 105 MB/s
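
For reference, numbers like the above can be reproduced with plain dd over a mounted CIFS share; bs=5M matches the record counts in the output, while the share name, mount point and file names below are placeholders:

# Mount the share exported by the samba container (share name and credentials are assumptions):
mount -t cifs //storage/data /mnt/storage -o user=michael

# Read test: pull a large file from the NAS and discard it locally:
dd if=/mnt/storage/bigfile of=/dev/null bs=5M

# Write test: push a large local file onto the NAS:
dd if=/srv/bigfile of=/mnt/storage/bigfile bs=5M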

During rsync+ssh backups, the CPU is never 100% maxed out, and data is sent to the NAS at 65 MB/s.

The new software setup

Given that I wanted to use a software setup which has as few non-standard modifications as possible and automatically updates itself, I was curious to see if I could carry this to the extreme by using CoreOS.

If you’re unfamiliar with it, CoreOS is a Linux distribution which is intended to be used in clusters on individual nodes. It updates itself automatically (using omaha, Google’s updater behind ChromeOS) and comes as a largely read-only system without a package manager. You deploy software using Docker and configure the setup using cloud-config.

I have been successfully using CoreOS for a few years in virtual machine setups such as the one for the RobustIRC network.

The cloud-config file I came up with can be found in Appendix A. You can pass it to the CoreOS installer’s -c flag. Personally, I installed CoreOS by booting from a grml live linux USB key, then running the CoreOS installer.
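
For reference, the installation step itself boils down to a single command; the target disk and release channel below are assumptions, only the cloud-config file name and the -c flag are taken from the text:

# Run from the grml live system; /dev/sda is the (assumed) target SSD:
coreos-install -d /dev/sda -C stable -c cloud-config.storage.yaml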

In order to update the cloud-config file after installing CoreOS, you can use the following commands:

midna $ scp cloud-config.storage.yaml core@10.252:/tmp/
storage $ sudo cp /tmp/cloud-config.storage.yaml /var/lib/coreos-install/user_data
storage $ sudo coreos-cloudinit --from-file=/var/lib/coreos-install/user_data

Dockerfiles: rrsync and samba

Since neither rsync nor samba directly provide Docker containers, I had to whip up the following small Dockerfiles which install the latest versions from Debian jessie.

Of course, this means that I need to rebuild these two containers regularly, but I also can easily roll them back in case an update broke, and the rest of the operating system updates independently of these mission-critical pieces.

Eventually, I’m looking to enable auto-build for these Docker containers so that the Docker hub rebuilds the images when necessary, and the updates are picked up either manually when time-critical, or automatically by virtue of CoreOS rebooting to update itself.

# Dockerfile for the stapelberg/rsync image: install rsync and extract the
# rrsync (restricted rsync) helper script shipped with its documentation.
FROM debian:jessie
RUN apt-get update \
  && apt-get install -y rsync \
  && gunzip -c /usr/share/doc/rsync/scripts/rrsync.gz > /usr/bin/rrsync \
  && chmod +x /usr/bin/rrsync
ENTRYPOINT ["/usr/bin/rrsync"]

# Dockerfile for the stapelberg/samba image: install samba and add the
# site-specific smb.conf.
FROM debian:jessie
RUN apt-get update && apt-get install -y samba
ADD smb.conf /etc/samba/smb.conf
EXPOSE 137 138 139 445
CMD ["/usr/sbin/smbd", "-FS"]
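
Since the samba.service unit and the authorized_keys entries in Appendix A reference the images as stapelberg/samba and stapelberg/rsync, the two containers have to be built and pushed to the Docker hub once; roughly like this (a sketch, the directory layout is an assumption):

# In the directory containing the rrsync Dockerfile:
docker build -t stapelberg/rsync .
docker push stapelberg/rsync

# In the directory containing the samba Dockerfile and smb.conf:
docker build -t stapelberg/samba .
docker push stapelberg/samba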

Appendix A: cloud-config

#cloud-config

hostname: "storage"

ssh_authorized_keys:
  - ssh-rsa AAAAB… michael@midna

write_files:
  - path: /etc/ssl/certs/r.zekjur.net.crt
    content: |
      -----BEGIN CERTIFICATE-----
      MIIDYjCCAko…
      -----END CERTIFICATE-----
  - path: /var/lib/ip6tables/rules-save
    permissions: 0644
    owner: root:root
    content: |
      # Generated by ip6tables-save v1.4.14 on Fri Aug 26 19:57:51 2016
      *filter
      :INPUT DROP [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -p ipv6-icmp -m comment --comment "IPv6 needs ICMPv6 to work" -j ACCEPT
      -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment "Allow packets for outgoing connections" -j ACCEPT
      -A INPUT -s fe80::/10 -d fe80::/10 -m comment --comment "Allow link-local traffic" -j ACCEPT
      -A INPUT -s 2001:db8::/32 -m comment --comment "local traffic" -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
      COMMIT
      # Completed on Fri Aug 26 19:57:51 2016
  - path: /root/.ssh/authorized_keys
    permissions: 0600
    owner: root:root
    content: |
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/midna:/srv/backup/midna stapelberg/rsync /srv/backup/midna" ssh-rsa AAAAB… root@midna
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/scan2drive:/srv/backup/scan2drive stapelberg/rsync /srv/backup/scan2drive" ssh-rsa AAAAB… root@scan2drive
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/alp.zekjur.net:/srv/backup/alp.zekjur.net stapelberg/rsync /srv/backup/alp.zekjur.net" ssh-rsa AAAAB… root@alp

coreos:
  update:
    reboot-strategy: "reboot"
  locksmith:
    window_start: 01:00 # UTC, i.e. 02:00 CET or 03:00 CEST
    window_length: 2h  # i.e. until 03:00 CET or 04:00 CEST
  units:
    - name: ip6tables-restore.service
      enable: true

    - name: 00-enp2s0.network
      runtime: true
      content: |
        [Match]
        Name=enp2s0

        [Network]
        DNS=10.0.0.1
        Address=10.0.0.252/24
        Gateway=10.0.0.1
        IPv6Token=0:0:0:0:10::252

    - name: systemd-networkd-wait-online.service
      command: start
      drop-ins:
        - name: "10-interface.conf"
          content: |
            [Service]
            ExecStart=
            ExecStart=/usr/lib/systemd/systemd-networkd-wait-online \
	      --interface=enp2s0

    - name: unlock.service
      command: start
      content: |
        [Unit]
        Description=unlock hard drive
        Wants=network.target
        After=systemd-networkd-wait-online.service
        Before=samba.service
        
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Wait until the host is actually reachable.
        ExecStart=/bin/sh -c "c=0; while [ $c -lt 5 ]; do \
	    /bin/ping6 -n -c 1 r.zekjur.net && break; \
	    c=$((c+1)); \
	    sleep 1; \
	  done"
        ExecStart=/bin/sh -c "(echo -n my_local_secret && \
	  wget \
	    --retry-connrefused \
	    --ca-directory=/dev/null \
	    --ca-certificate=/etc/ssl/certs/r.zekjur.net.crt \
	    -qO- https://r.zekjur.net/sdb2_crypt) \
	  | /sbin/cryptsetup --key-file=- luksOpen /dev/sdb2 sdb2_crypt"
        ExecStart=/bin/mount /dev/mapper/sdb2_crypt /srv

    - name: samba.service
      command: start
      content: |
        [Unit]
        Description=samba server
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull stapelberg/samba:latest

        # Set up samba users (cannot be done in the (public) Dockerfile
        # because users/passwords are sensitive information).
        ExecStartPre=-/usr/bin/docker kill smb
        ExecStartPre=-/usr/bin/docker rm smb
        ExecStartPre=-/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep stapelberg/samba \
	  adduser --quiet --disabled-password --gecos "" --uid 29901 michael
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared
        ExecStartPre=/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep smb-prepared \
	  /bin/sh -c "echo my_password | tee - | smbpasswd -a -s michael"
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared

        ExecStart=/usr/bin/docker run \
          -p 137:137 \
          -p 138:138 \
          -p 139:139 \
          -p 445:445 \
          --tmpfs=/run \
          -v /srv/data:/srv/data \
          --name smb \
          -t \
          smb-prepared \
            /usr/sbin/smbd -FS

    - name: emby.service
      command: start
      content: |
        [Unit]
        Description=emby
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull emby/embyserver

        ExecStart=/usr/bin/docker run \
          --rm \
          --net=host \
          -v /srv/data/movies:/srv/data/movies:ro \
          -v /srv/data/series:/srv/data/series:ro \
          -v /srv/emby:/config \
          emby/embyserver

by Michael Stapelberg at 21. November 2016 16:12:00

16. November 2016

sur5r/blog

Another instance of AC_DEFINE being undefined

While trying to build a backport of i3 4.13 for Debian wheezy (currently oldstable), I stumbled across the following problem:


   dh_autoreconf -O--parallel -O--builddirectory=build
configure.ac:132: error: possibly undefined macro: AC_DEFINE
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
dh_autoreconf: autoreconf -f -i returned exit code 1


Digging around the net, I found nothing that helped. So I tried building that package manually. After some trial and error, I noticed that autoreconf as of wheezy seems to ignore AC_CONFIG_MACRO_DIR. Calling autoreconf with -i -f -I m4 solved this.

I finally added this to debian/rules:

override_dh_autoreconf:
        dh_autoreconf autoreconf -- -f -i -I m4

by sur5r (nospam@example.com) at 16. November 2016 00:49:23

29. September 2016

RaumZeitLabor

††† To all Cheesus freaks: A holy feast with Käsus Christus †††

KaesusLiebtDich



It is time to commemorate our friend Käsus Christus!

We want to break bread Halloumi-together at a supper, for eating with friends is good for the soul. So Comté along, all of you, to the RaumZeitLabor in Mannheim-Emmental on 22.10.2016. When the Babybel rings at 18:30, we will begin the holy Feta with a Havarti Unser. Do not hesitate to accept this invitation: so that we know how much wine to turn water into, give us a divine sign [†] by 19.10.2016. We would be delighted if each of you dropped 10 Euro into the collection bag. Remember: Käsus Christus also died for your Stilton.

Brie-de be with you. Amen.

[†] No mail, no cheese!

by flederrattie at 29. September 2016 00:00:00

31. August 2016

Mero’s Blog

I've been diagnosed with ADHD

tl;dr: I've been diagnosed with ADHD. I ramble incoherently for a while and I might do some less rambling posts about it in the future.

As the title says, I've been recently diagnosed with ADHD and I thought I'd try to be as open about it as possible and share my personal experiences with mental illness. That being said, I am also adding the disclaimer that I have no training or special knowledge about it and that the fact that I have been diagnosed with ADHD does not mean I am an authoritative source on its effects, that this diagnosis is going to stick or that my experiences in any way generalize to other people who got the same diagnosis.

This will hopefully turn into a bit of a series of blog posts and I'd like to start it off with a general description of what led me to look for a diagnosis and treatment in the first place. Some of the things I am only touching on here I might write more about in the future (see below for a non-exhaustive list). Or not. I have not yet decided :)


It is no secret (it's actually kind of a running gag) that I am a serious procrastinator. I always had trouble starting on something and staying with it; my graveyard of unfinished projects is huge. For most of my life, however, this hasn't been a huge problem to me. I was reasonably successful in compensating for it with a decent amount of intelligence (arrogant as that sounds). I never needed any homework and never needed to learn for tests in school and even at university I only spent very little time on both. The homework we got was short-term enough that procrastination was not a real danger, I finished it quickly and whenever there was a longer-term project to finish (such as a seminar-talk or learning for exams) I could cram for a night and get enough of an understanding of things to do a decent job.

However, that strategy did not work for either my bachelor or my master thesis, which predictably led to both turning out a lot worse than I would've wished for (I am not going to go into too much detail here). Self-organized long-term work seemed next to impossible. This problem got much worse when I started working full-time. Now almost all my work was self-organized and long-term. Goals are set on a quarterly basis; the decision when and how and how much to work is completely up to you. Other employers might frown at their employees slacking off at work; where I work, it's almost expected. I was good at being oncall, which is mainly reactive, short-term problem solving. But I was (and am) completely dissatisfied with my project work. I felt that I did not get nearly as much done as I should or as I would want. My projects in my first quarter had very clear deadlines and I finished on time (I still procrastinated, but at least at some point I sat down until I got it done. It still meant staying at work until 2am the day before the deadline) but after that it went downhill fast, with projects that needed to be done ASAP, but not with a deadline. So I started slipping. I didn't finish my projects (in fact, the ones that I didn't drop I am still not done with), I spent weeks doing effectively nothing (I am not exaggerating here. I spent whole weeks not writing a single line of code, closing a single bug or running a single command, doing nothing but browsing reddit and twitter and wasting my time in similar ways. Yes, you can waste a week doing nothing, while sitting at your desk), not being able to get myself to start working on anything and hating myself for it.

And while I am mostly talking about work, this affected my personal life too. Mail remains unopened, important errands don't get done, I am having trouble keeping in contact with friends, because I can always write or visit them some other time…

I tried (and had tried over the past years) several systems to organize myself better, to motivate myself and to remove distractions. I was initially determined to try to solve my problems on my own, that I did not really need professional help. However, at some point, I realized that I won't be able to fix this just by willing myself to it. I realized it in the final months of my master thesis, but convinced myself that I don't have time to fix it properly, after all, I have to write my thesis. I then kind of forgot about it (or rather: I procrastinated it) in the beginning of starting work, because things were going reasonably well. But it came back to me around the start of this year. After not finishing any of my projects in the first quarter. And after telling my coworkers and friends of my problems and them telling me that it's just impostor syndrome and a distorted view of myself (I'll go into why they were wrong some more later, possibly).

I couldn't help myself and I couldn't get effective help from my coworkers. So, in April, I finally decided to see a Psychologist. Previously the fear of the potential cost (or the stress of dealing with how that works with my insurance), the perceived complexity of finding one that is both accepting patients that are only publicly insured and is specialized on my particular kinds of issues and the perceived lack of time prevented me from doing so. Apart from a general doubt about its effectiveness and fear of the implications for my self-perception and world-view, of course.

Luckily one of the employee-benefits at my company is free and uncomplicated access to a Mental Health (or "Emotional well-being", what a fun euphemism) provider. It only took a single E-Mail and the meetings happen around 10 meters away from my desk. So I started seeing a psychologist on a regular basis (on average probably every one and a half weeks or so) and talking about my issues. I explained and described my problems and it went about as well as I feared; they tried to convince me that the real issue isn't my performance, but my perception of it (and I concede that I still have trouble coming up with hard, empirical evidence to present to people. Though it's performance review season right now. As I haven't done anything of substance in the past 6 months, maybe I will finally get that evidence…) and they tried to get me to adopt more systems to organize myself and remove distractions. All the while, I got worse and worse. My inability to get even the most basic things done or to concentrate even for an hour, even for five minutes, on anything of substance, combined with the inherent social isolation of moving to a new city and country, led me into deep depressive episodes.

Finally, when my Psychologist in a session tried to get me to write down what was essentially an Unschedule (a system I knew about from reading "The Now Habit" myself when working on my bachelor thesis and that I even had moderate success with; for about two weeks), I broke down. I told them that I do not consider this a valuable use of these sessions, that I tried systems before, that I tried this particular system before and that I can find this kind of lifestyle advice on my own, in my free time. That the reason I was coming to these sessions was to get systematic, medical, professional help of the kind that I can't get from books. So we agreed, at that point, to pursue a diagnosis, as a precondition for treatment.

Which, basically, is where we are at now. The diagnostic process consisted of several sessions of questions about my symptoms, my childhood and my life in general, of filling out a couple of diagnostic surveys and having my siblings fill them out too (in the hope that they can fill in some of the blanks in my own memory about my childhood) and of several sessions of answering more questions from more surveys. And two weeks ago, I officially got the diagnosis ADHD. And the plan to attack it by a combination of therapy and medication (the latter, in particular, is really hard to get from books, for some reason :) ).

I just finished my first day on Methylphenidate (the active substance in Ritalin), specifically Concerta. And though this is definitely much too early to actually make definitive judgments on its effects and effectiveness, at least for this one day I was feeling really great and actively happy. Which, coincidentally, helped me to finally start on this post, to talk about mental health issues; a topic I've been wanting to talk about ever since I started this blog (again), but so far didn't really felt I could.


As I said, this is hopefully the first post in a small ongoing series. I am aware that it is long, rambling and probably contentious. It definitely won't get all my points across and will change the perception some people have of me (I can hear you thinking how all of this doesn't really speak "mental illness", how it seems implausible that someone with my CV would actually, objectively, get nothing done and how I am a drama queen and shouldn't try to solve my problems with dangerous medication). It's an intentionally broad overview of my process and its main purpose is to "out" myself publicly and create starting points for multiple, more specific, concise, interesting and convincing posts in the future. Things I might, or might not talk about are

  • My specific symptoms and how this has influenced and still is influencing my life (and how, yes, this is actually an illness, not just a continuous label). In particular, there are things I wasn't associating with ADHD, which turn out to be relatively tightly linked.
  • How my medication is specifically affecting me and what it does to those symptoms. I can not overstate how fascinated I am with today's experience. I was wearing a visibly puzzled expression all day because I couldn't figure out what was happening. And then I couldn't stop smiling. :)
  • Possibly things about my therapy? I really don't know what to expect about that, though. Therapy is kind of the long play, so it's much harder to evaluate and talk about its effectiveness.
  • Why I consider us currently to be in the Middle Ages of mental health and why I think that in a hundred years or so people will laugh at how we currently deal with it. And possibly be horrified.
  • My over ten years (I'm still young, mkey‽) of thinking about my own mental health and mental health in general and my thoughts of how mental illness interacts with identity and self-definition.
  • How much I loathe the term "impostor syndrome" and why I am (still) convinced that I don't get enough done, even though I can't produce empirical evidence for that and people try to convince me otherwise. And what it does to you, to need to take the "I suck" side of an argument and still not have people believe you.

Let me know, what you think :)

31. August 2016 02:22:38

15. August 2016

sur5r/blog

Calculating NXP LPC43xx checksums using srecord 1.58

As everybody on the internet seems to be relying on either Keil on Windows or precompiled binaries from unknown sources to generate their checksums, I investigated a bit...

Assuming you ran something like this to generate your binary image:

arm-none-eabi-objcopy -v -O binary firmware.elf firmware.bin

The resulting firmware.bin is still lacking the checksum which LPC chips use to check the validity of the user code.

The following command will create a file firmware_out.bin with the checksum applied:

srec_cat firmware.bin -binary -crop 0x0 0x1c -le-checksum-negative 0x1C 4 4 firmware.bin -binary -crop 0x20 -o firmware_out.bin -binary
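
To sanity-check the result, it helps to dump the patched vector table: as far as I understand the NXP boot ROM's criterion, the word at offset 0x1C holds the two's complement of the sum of the first seven words, so the first eight 32-bit words of a valid image sum to zero modulo 2^32:

# Dump the first 32 bytes (the vector table entries including the checksum word at 0x1c):
hexdump -C firmware_out.bin | head -n 2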

by sur5r (nospam@example.com) at 15. August 2016 15:05:54

19. July 2016

RaumZeitLabor

MRMCD 2016: Vorverkauf läuft

Vorr. T-Shirt-Motiv

English version below

Seit einigen Wochen läuft der Vorverkauf zu den MRMCD 2016. Tickets und T-Shirts können noch bis Ende Juli unter presale.mrmcd.net bestellt werden.

Die Teilnahme am Vorverkauf erleichtert unsere Arbeit sehr, da er uns eine bessere Planung ermöglicht und die finanziellen Mittel verschafft, die wir vor der Konferenz schon brauchen. Es wird eine Abendkasse geben, an der allerdings keine T-Shirts und nur begrenzt Goodies erhältlich sind.

Wir sind unter klinikleitung@mrmcd.net für alle Fragen erreichbar.

Die MRMCD (MetaRheinMainChaosDays) sind eine seit mehr als zehn Jahren jährlich stattfindende IT-Konferenz des CCC mit einer leichten thematischen Ausrichtung zur IT-Sicherheit. Seit 2012 findet die Veranstaltung in Zusammenarbeit mit dem Fachbereich Informatik an der Hochschule Darmstadt (h_da) statt. Neben einem hochwertigen Vortragsprogramm bieten die MRMCD die Möglichkeit zum entspannten Austausch mit zahlreichen IT-Experten im Rahmen einer zwanglosen Atmosphäre. Das diesjährige Motto “diagnose: kritisch” setzt einen Themenschwerpunkt auf IT und Innovation rund um Medizin und Gesundheit.

Wir freuen uns bis zum 25.07. auch noch über zahlreiche Vortragseinreichungen im frab. Weitere Informationen gibt es auf unserer Website mrmcd.net.

The presale of this year’s MRMCD tickets has started a few weeks ago, you can buy your tickets and t-shirts at presale.mrmcd.net. The presale runs until July 25th.

With buying your tickets in advance, you make organizing this conference a lot easier for us. It enables us to properly plan the event and gives us the money we need to have in advance to buy all the things a conference needs. There will be a ticket sale on-site, but no t-shirts and no guaranteed goodies.

If you have any questions, please contact us at klinikleitung@mrmcd.net.

The MRMCD (MetaRheinMainChaosDays) are an annual IT conference of the CCC with a slight focus on IT security. MRMCD have been taking place in the Rhine-Main area for over 10 years. Ever since 2012 we cooperate with the the faculty of Computer Science of the University of Applied Sciences Darmstadt (h_da). Apart from the conference program, the MRMCD provide the opportunity of exchanges with IT experts in a relaxed atmosphere. This year’s motto “diagnosis: critical” sets a special focus on IT and innovation in the medical and health field.

We are still accepting talk submissions until the 25th of July and we look forward to your submission at frab.cccv.de. You can find further information on our website mrmcd.net.

by Alexander at 19. July 2016 00:00:00

28. June 2016

RaumZeitLabor

Impressions from the elevator museum in Seckenheim

Last Friday, a small delegation of RaumZeitLabor members set out to visit the Seckenheim water tower. Inside, it houses a unique museum all about elevators that is well worth seeing.

During the roughly two-hour tour we learned all sorts of exciting and curious things about elevators, plus exclusive insider information about the Lochbühler company - all illustrated with matching exhibits. One of the highlights was the fully functional paternoster, although to ride it we would have had to apply as paternoster dummies beforehand.

At the end of the tour we got to enjoy the view over Mannheim from the Lochbühler bar, which sits directly beneath the water tower's dome.

All in all, a successful excursion: the seal of approval »Außenstelle des RaumZeitLabors« (branch office of the RaumZeitLabor) was, of course, awarded.

Aufzugsmuseum

by tabascoeye at 28. June 2016 00:00:00

20. June 2016

Moredreads Blog

16. June 2016

sECuREs Webseite

Conditionally tunneling SSH connections

Whereas most of the networks I regularly use (home, work, hackerspace, events, …) provide native IPv6 connectivity, sometimes I’m in a legacy-only network, e.g. when tethering via my phone on some mobile providers.

By far the most common IPv6-only service I use these days is SSH to my computer(s) at home. On philosophical grounds, I refuse to set up a dynamic DNS record and port-forwardings, so the alternative I use is either Miredo or tunneling through a dual-stacked machine. For the latter, I used to use the following SSH config:

Host home
        Hostname home.zekjur.net
        ProxyCommand ssh -4 dualstack -W %h:%p

The issue with that setup is that it’s inefficient when I’m in a network which does support IPv6, and it requires me to authenticate to both machines. These are not huge issues of course, but they bother me enough that I’ve gotten into the habit of commenting out/in the ProxyCommand directive regularly.

I’ve discovered that SSH can be told to use a ProxyCommand only when you don’t have a route to the public IPv6 internet, though:

Match host home exec "ip -6 route get 2001:7fd::1 | grep -q unreachable"
        ProxyCommand ssh -4 dualstack -W %h:%p

Host home
        Hostname home.zekjur.net

The IPv6 address used is from k.root-servers.net, but no packets are being sent — we merely ask the kernel for a route. When you don’t have an IPv6 address or default route, the ip command will print unreachable, which enables the ProxyCommand.

For debugging/verifying the setup works as expected, use ssh -vvv and look for lines prefixed with “Debug3”.
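
Another way to check which ProxyCommand (if any) would take effect is to dump the resolved client configuration, assuming OpenSSH 6.8 or newer:

# Print the effective configuration for the "home" host; the proxycommand line
# should only show up when the IPv6 route check above matched:
ssh -G home | grep -i proxycommand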

by Michael Stapelberg at 16. June 2016 17:20:00

02. June 2016

RaumZeitLabor

Excursion to the elevator museum in Seckenheim

Aufzugmuseum

It's finally that time again: excursion time! Or rather: elevator time! Together we will visit the elevator museum in Seckenheim.

The Seckenheim water tower was converted into a museum by the Lochbühler company; it showcases working elevators, components and their technology dating back to the end of the 19th century. Besides the »normal« elevators, the museum also features a paternoster which - if we behave - we may even get to ride.

It all takes place on Friday, 24 June 2016. We will meet at 18:15 in front of the entrance at Kloppenheimer Straße 94 in Mannheim-Seckenheim. The tour starts at 18:30 and is expected to last about an hour and a half.

Attendance is limited to 20 people, so prior registration is required. To register, simply send me an email with the subject “Exkursion Aufzugsmuseum” and the number of seats you would like. Registrations must reach me by 22 June 2016, 13:37.

You can find more information, for example, on the Rhein-Neckar-Industriekultur website.

by flederrattie at 02. June 2016 00:00:00

01. June 2016

Moredreads Blog

26. May 2016

mfhs Blog

Introduction to awk programming block course

Last year I taught a course about bash scripting, during which I briefly touched on the scripting language awk. Some of the attending people wanted to hear more about this, so I was asked by my graduate school to prepare a short block course on awk programming for this year. The course will be running from 15th till 17th August 2016 and registration is now open. You can find an outline and further information on the "Introduction to awk" course page. If you cannot make the course in person, do not worry: All course material will be published both on the course website and on github afterwards.

by mfh at 26. May 2016 00:00:19

10. May 2016

sECuREs Webseite

Supermicro X11SSZ-QF workstation mainboard

Context

For the last 3 years I’ve used the hardware described in my 2012 article. In order to drive a hi-dpi display, I needed to install an nVidia graphics card, since only the nVidia hardware/software supported multi-tile displays requiring MST (Multiple Stream Transport) such as the Dell UP2414Q. While I’ve switched to a Viewsonic VX2475Smhl-4K in the meantime, I still needed a recent-enough DisplayPort output that could deliver 3840x2160@60Hz. This is not the case for the Intel Core i7-2600K’s integrated GPU, so I needed to stick with the nVidia card.

I then stumbled over a video file which, when played back using the nouveau driver’s VDPAU functionality, would lock up my graphics card entirely, so that only a cold reboot helped. This got me annoyed enough to upgrade my hardware.

Why the Supermicro X11SSZ-QF?

Intel, my standard pick for mainboards with good Linux support, unfortunately stopped producing desktop mainboards. I looked around a bit for Skylake mainboards and realized that the Intel Q170 Express chipset actually supports 2 DisplayPort outputs that each support 3840x2160@60Hz, enabling a multi-monitor hi-dpi display setup. While I don’t currently have multiple monitors and don’t intend to get another monitor in the near future, I thought it’d be nice to have that as a possibility.

Turns out that there are only two mainboards out there which use the Q170 Express chipset and actually expose two DisplayPort outputs: the Fujitsu D3402-B, and the Supermicro X11SSZ-QF. The Fujitsu one doesn’t have an integrated S/PDIF output, which I need to play audio on my Denon AVR-X1100W without a constant noise level. Also, I wasn’t able to find software downloads or even a manual for the board on the Fujitsu website. For Supermicro, you can find the manual and software very easily on their website, and because I bought Supermicro hardware in the past and was rather happy with it, I decided to go with the Supermicro option.

I’ve been using the board for half a year now, without any stability issues.

Mechanics and accessories

The X11SSZ-QF ships with a printed quick reference sheet, an I/O shield and 4 SATA cables. Unfortunately, Supermicro apparently went for the cheapest SATA cables they could find, as they do not have a clip to ensure they don’t slide off of the hard disk connector. This is rather disappointing for a mainboard that costs more than 300 CHF. Further, an S/PDIF bracket is not included, so I needed to order one from the USA.

The I/O shield comes with covers over each port, which I assume is because the X11SSZ mainboard family has different ports (one model has more ethernet ports, for example). When removing the covers, push them through from the rear side of the case (if you had installed it already). If you do it from the other side, a bit of metal will remain in each port.

Due to the positioning of the CPU socket, with my Fractal Design Define R3 case, one cannot reach the back of the CPU fan bracket when the mainboard is installed in the case. Hence, you need to first install the CPU fan, then install the mainboard. This is doable, you just need to realize it early enough and think about it, otherwise you’ll install the mainboard twice.

Integrated GPU not initialized

The integrated GPU is not initialized by default. You need to either install an external graphics card or use IPMI to enter the BIOS and change Advanced → Chipset Configuration → Graphics Configuration → Primary Display to “IGFX”.

For using IPMI, you need to connect the ethernet port IPMI_LAN (top right on the back panel, see the X11SSZ-QF quick reference guide) to a network which has a DHCP server, then connect to the IPMI’s IP address in a browser.

Overeager Fan Control

When I first powered up the mainboard, I was rather confused by the behavior: I got no picture (see above), but LED2 was blinking, meaning “PWR Fail or Fan Fail”. In addition, the computer seemed to turn itself off and on in a loop. After a while, I realized that it’s just the fan control which thinks my slow-spinning Scythe Mugen 3 Rev. B CPU fan is broken because of its low RPM value. The fan control subsequently spins up the fan to maximum speed, realizes the CPU is cool enough, spins down the fan, realizes the fan speed is too low, spins up the fan, etc.

Neither in the BIOS nor in the IPMI web interface did I find any options for the fan thresholds. Luckily, you can actually introspect and configure them using IPMI:

# apt-get install freeipmi-tools
# ipmi-sensors-config --filename=ipmi-sensors.config --checkout

In the human-readable text file ipmi-sensors.config you can now introspect the current configuration. You can see that FAN1 and FAN2 have sections in that file:

Section 607_FAN1
 Enable_All_Event_Messages Yes
 Enable_Scanning_On_This_Sensor Yes
 Enable_Assertion_Event_Lower_Critical_Going_Low Yes
 Enable_Assertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Assertion_Event_Upper_Critical_Going_High Yes
 Enable_Assertion_Event_Upper_Non_Recoverable_Going_High Yes
 Enable_Deassertion_Event_Lower_Critical_Going_Low Yes
 Enable_Deassertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Deassertion_Event_Upper_Critical_Going_High Yes
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
 Lower_Non_Critical_Threshold 700.000000
 Lower_Critical_Threshold 500.000000
 Lower_Non_Recoverable_Threshold 300.000000
 Upper_Non_Critical_Threshold 25300.000000
 Upper_Critical_Threshold 25400.000000
 Upper_Non_Recoverable_Threshold 25500.000000
 Positive_Going_Threshold_Hysteresis 100.000000
 Negative_Going_Threshold_Hysteresis 100.000000
EndSection

When running ipmi-sensors, you can see the current temperatures, voltages and fan readings. In my case, the fan spins with 700 RPM during normal operation, which was exactly the Lower_Non_Critical_Threshold in the default IPMI config. Hence, I modified my config file as illustrated by the following diff:

--- ipmi-sensors.config	2015-11-13 11:53:00.940595043 +0100
+++ ipmi-sensors-fixed.config	2015-11-13 11:54:49.955641295 +0100
@@ -206,11 +206,11 @@
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
- Lower_Non_Critical_Threshold 700.000000
+ Lower_Non_Critical_Threshold 400.000000
- Lower_Critical_Threshold 500.000000
+ Lower_Critical_Threshold 200.000000
- Lower_Non_Recoverable_Threshold 300.000000
+ Lower_Non_Recoverable_Threshold 0.000000
 Upper_Non_Critical_Threshold 25300.000000

You can install the new configuration using the --commit flag:

# ipmi-sensors-config --filename=ipmi-sensors-fixed.config --commit

You might need to shut down your computer and disconnect power for this change to take effect, since the BMC is running even when the mainboard is powered off.
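
Afterwards, the fan sensors can be checked again; with the relaxed thresholds, the 700 RPM reading should no longer trip the lower limits:

# Show the current fan readings (run as root, like the commands above):
ipmi-sensors | grep -i fan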

S/PDIF output

The S/PDIF pin header on the mainboard just doesn’t work at all. It does not work in Windows 7 (for which the board was made), and it doesn’t work in Linux. Neither the digital nor the analog part of an S/PDIF port works. When introspecting the Intel HDA setup of the board, the S/PDIF output is not even hooked up correctly. Even after fixing that, it doesn’t work.

Of course, I’ve contacted the Supermicro support. After making clear to them what my use-case is, they ordered (!) an S/PDIF header and tested the analog part of it. Their technical support claims that their port is working, but they never replied to my question with which operating system they tested that, despite me asking multiple times.

It’s pretty disappointing to see that the support is unable to help here or at least confirm that it’s broken.

To address the issue, I’ve bought an ASUS Xonar DX sound card. It works out of the box on Linux, and its S/PDIF port works. The S/PDIF port is shared with the Line-in/Mic-in jack, but a suitable adapter is shipped with the card.

Wake-on-LAN

I haven’t gotten around to using Wake-on-LAN or Suspend-to-RAM yet. I will amend this section when I get around to it.

Conclusion

It’s clear that this mainboard is not for consumers. This begins with the awkward graphics and fan control setup and culminates in the apparently entirely untested S/PDIF output.

That said, once you get it working, it works reliably, and it seems like the only reasonable option with two onboard DisplayPort outputs.

by Michael Stapelberg at 10. May 2016 18:46:00

19. April 2016

RaumZeitLabor

MRMCD 2016: CFP

MRMCD Logo 2016

tl;dr: The MRMCD 2016 will take place from 2 to 4 September in Darmstadt.

The MRMCD clinic Darmstadt is an IT security clinic offering maximum care. We are looking for practicing participants from all areas of IT and hacker culture and would be delighted if you could contribute to our treatment programme. Our areas of focus include network surgery and cryptography, plastic and reconstructive open-source development, but also the use of robots in medicine. The clinic incorporates one of the largest and most modern centres in Germany for the treatment of severe and most severe medical device hacks, as well as a ward for injuries to the hacker ethic. This makes the MRMCD2016 the ideal environment for your current research projects. Our expert audience is interested in innovative therapeutic approaches from all areas of IT. An initial diagnosis of the conference topics of past years is available in the clinic archive.

Ready for new challenges?

Submit your meaningful application by 25.07.2016 via our applicant portal. Even if your desired speciality is not among our focus areas, we naturally welcome unsolicited applications. All new staff members will be notified of the placement decisions on 01.08.2016. If your application is accepted as part of our CFP, you can already look forward to our renowned all-round care in the exclusive head physician's lounge.

The staff will answer questions about applications and the MRMCD clinic by email.

The MetaRheinMainChaosDays (MRMCD) are an annual IT conference of the Chaos Computer Club (CCC) that has been taking place for more than ten years, with a slight thematic focus on IT security. Since 2012 the event has been organized in cooperation with the Department of Computer Science at the University of Applied Sciences Darmstadt (h_da). Apart from a high-quality talk programme, the MRMCD offer the opportunity for relaxed exchanges with numerous IT experts in an informal atmosphere.

This year's MRMCD take place from 2 to 4 September. Further information can be found at 2016.mrmcd.net.

by Alexander at 19. April 2016 00:00:00

14. March 2016

mfhs Blog

[c¼h] Testen mit Rapidcheck und Catch

Last Thursday, I once gave another short talk at the Heidelberg Chaostreff NoName e.V. This time I talked about writing tests in C++ using the testing libraries rapidcheck and Catch.

In the talk I presented some ideas on how to incorporate property-based testing into test suites for C++ programs. The idea of property-based testing originates from the Haskell library QuickCheck, which tries to use properties of the input data and of the output data in order to generate random test cases for a code unit. So instead of testing a piece of code with just a small number of static tests, which are the same for each run of the test suite, we test with randomly seeded data. Additionally, if a test fails, QuickCheck/rapidcheck automatically simplifies the test case in order to find the simplest input which yields a failing test. This of course eases finding the underlying bug massively.

Since this was our first Treff in the new Mathematikon building, which the University only opened recently, we had a few technical difficulties with our setup. As a result, there is no recording of my talk available this time, unfortunately. The example code I used during the presentation, however, is available on github. It contains a couple of buggy classes and functions and a Catch-based test program, which can be used to find these bugs.

Link                               Licence
c14h-rapidcheck-catch repository   GNU GPL v3

by mfh at 14. March 2016 00:00:25

06. March 2016

sECuREs Webseite

Docker on Travis for new tools and fast runs

Like many other open source projects, the i3 window manager is using Travis CI for continuous integration (CI). In our specific case, we not only verify that every pull request compiles and the test suite still passes, but we also ensure the code is auto-formatted using clang-format, does not contain detectable spelling errors and does not accidentally use C functions like sprintf() without error checking.

By offering their CI service for free, Travis provides a great service to the open source community, and I’m very thankful for that. Automatically running the test suite for contributions and displaying the results alongside the pull request is a feature that I’ve long wanted, but would have never gotten around to implementing in the home-grown code review system we used before moving to GitHub.

Motivation (more recent build environment)

Nothing is perfect, though, and some aspects of Travis can make it hard to work with. In particular, the build environment they provide is rather old: at the time of writing, the latest you can get is Ubuntu Trusty, which was released almost two years ago. I realize that Ubuntu Trusty is the current Ubuntu Long-Term Support release, but we want to move a bit quicker than being able to depend on new packages roughly once every two years.

For quite a while, we had to make do with that old environment. As a mitigation, in our .travis.yml file, we added the whitelisted ubuntu-toolchain-r-test source for newer versions of clang (notably also clang-format) and GCC. For integrating lintian’s spell checking into our CI infrastructure, we needed a newer lintian version, as the version in Ubuntu Trusty doesn’t have an interface for external scripts to use. Trying to make our .travis.yml file install a newer version of lintian (and only lintian!) was really challenging. To get a rough idea, take a look at our .travis.yml before we upgraded to Ubuntu Trusty and were stuck with Ubuntu Precise. Cherry-picking a newer lintian version into Trusty would have been even more complicated.

With Travis starting to offer Docker in their build environment, and by looking at Docker’s contribution process, which also makes heavy use of containers, we were able to put together a better solution:

Implementation

The basic idea is to build a Docker container based on Debian testing and then run all build/test commands inside that container. Our Dockerfile installs compilers, formatters and other development tools first, then installs all build dependencies for i3 based on the debian/control file, so that we don’t need to duplicate build dependencies for Travis and for Debian.
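
One way to install the build dependencies straight from debian/control, a sketch rather than necessarily the exact commands our Dockerfile uses, is the mk-build-deps helper from devscripts:

# Inside the container (e.g. as part of a RUN step):
apt-get update
apt-get install -y devscripts equivs
# Build and install a dummy package that depends on everything listed in debian/control:
mk-build-deps --install --remove --tool 'apt-get --no-install-recommends -y' debian/control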

This solves the immediate issue nicely, but comes at a significant cost: building a Docker container adds quite a bit of wall clock time to a Travis run, and we want to give our contributors quick feedback. The solution to long build times is caching: we can simply upload the Docker container to the Docker Hub and make subsequent builds use the cached version.

We decided to cache the container for a month, or until inputs to the build environment (currently the Dockerfile and debian/control) change. Technically, this is implemented by a little shell script called ha.sh (get it? hash!) which prints the SHA-256 hash of the input files. This hash, appended to the current month, is what we use as tag for the Docker container, e.g. 2016-03-3d453fe1.
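
The script itself is tiny; a minimal sketch that combines the hashing and the month prefix (an approximation, not necessarily the exact ha.sh we use) looks like this:

#!/bin/sh
# Print a tag of the form YYYY-MM-<hash>: the current month plus a short
# SHA-256 hash over the files that define the build environment.
echo "$(date +%Y-%m)-$(cat Dockerfile debian/control | sha256sum | cut -c 1-8)"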

See our .travis.yml for how to plug it all together.

Conclusion

We’ve been successfully using this setup for a bit over a month now. The advantages over pure Travis are:

  1. Our build environment is more recent, so we do not depend on Travis when we want to adopt tools that are only present in more recent versions of Linux.
  2. CI runs are faster: what used to take about 5 minutes now takes only 1-2 minutes.
  3. As a nice side effect, contributors can now easily run the tests in the same environment that we use on Travis.

There is some potential for even quicker CI runs: currently, all the different steps are run in sequence, but some of them could run in parallel. Unfortunately, Travis currently doesn’t provide a nice way to specify the dependency graph or to expose the different parts of a CI run in the pull request itself.

by Michael Stapelberg at 06. March 2016 19:00:00

01. January 2016

sECuREs Webseite

Prometheus: Using the blackbox exporter

Up until recently, I used to use kanla, a simple alerting program that I wrote 4 years ago. Back then, delivering alerts via XMPP (Jabber) to mobile devices like Android smartphones seemed like the best course of action.

About a year ago, I’ve started using Prometheus for collecting monitoring data and alerting based on that data. See „Monitoring mit Prometheus“, my presentation about the topic at GPN, for more details and my experiences.

Motivation to switch to the Blackbox Exporter

Given that the Prometheus Alertmanager is already configured to deliver alerts to my mobile device, it seemed silly to rely on two entirely different mechanisms. Personally, I’m using Pushover, but Alertmanager integrates with many popular providers, and it’s easy to add another one.

Originally, I considered extending kanla in such a way that it would talk to Alertmanager, but then I realized that the Prometheus Blackbox Exporter is actually a better fit: it’s under active development and any features that are added to it benefit a larger number of people than the small handful of kanla users.

Hence, I switched from having kanla probe my services to having the Blackbox Exporter probe my services. The rest of the article outlines my configuration in the hope that it’s useful for others who are in a similar situation.

I’m assuming that you are already somewhat familiar with Prometheus and just aren’t using the Blackbox Exporter yet.

Blackbox Exporter: HTTP

The first service I wanted to probe is Debian Code Search. The following blackbox.yml configuration file defines a module called “dcs_results” which, when called, downloads the specified URL via a HTTP GET request. The probe is considered failed when the download doesn’t finish within the timeout of 5 seconds, or when the resulting HTTP body does not contain the text “load_font”.

modules:
  dcs_results:
    prober: http
    timeout: 5s
    http:
      fail_if_not_matches_regexp:
      - "load_font"

In my prometheus.conf, this is how I invoke the probe:

- job_name: blackbox_dcs_results
  scrape_interval: 60s
  metrics_path: /probe
  params:
    module: [dcs_results]
    target: ['http://codesearch.debian.net/search?q=i3Font']
  scheme: http
  target_groups:
  - targets:
    - blackbox-exporter:9115

As you can see, the search query is “i3Font”, and I know that “load_font” is one of the results. In case Debian Code Search does not deliver the expected search results, I know something is seriously broken. To make Prometheus actually generate an alert when this probe fails, we need an alert definition like the following:

ALERT ProbeFailing
  IF probe_success < 1
  FOR 15m
  WITH {
    job="blackbox_exporter"
  }
  SUMMARY "probe {{$labels.job}} failing"
  DESCRIPTION "probe {{$labels.job}} failing"

Blackbox Exporter: IRC

With the TCP probe module’s query/response feature, we can configure a module that verifies an IRC server lets us log in:

modules:
  irc_banner:
    prober: tcp
    timeout: 5s
    tcp:
      query_response:
      - send: "NICK prober"
      - send: "USER prober prober prober :prober"
      - expect: "PING :([^ ]+)"
        send: "PONG ${1}"
      - expect: "^:[^ ]+ 001"

Blackbox Exporter: Git

The query/response feature can also be used for slightly more complex protocols. To verify a Git server is available, we can use the following configuration:

modules:
  git_code_i3wm_org:
    prober: tcp
    timeout: 5s
    tcp:
      query_response:
      - send: "002bgit-upload-pack /i3\x00host=code.i3wm.org\x00"
      - expect: "^[0-9a-f]+ HEAD\x00"

Note that the first characters are the ASCII-encoded hex length of the entire line:

$ echo -en '0000git-upload-pack /i3\x00host=code.i3wm.org\x00' | wc -c
43
$ perl -E 'say sprintf("%04x", 43)'
002b

The corresponding git URL for the example above is git://code.i3wm.org/i3. You can read more about the git protocol at Documentation/technical/pack-protocol.txt.

Blackbox Exporter: Meta-monitoring

Don’t forget to add an alert that will fire if the blackbox exporter is not available:

ALERT BlackboxExporterDown
  IF count(up{job="blackbox_dcs_results"} == 1) < 1
  FOR 15m
  WITH {
    job="blackbox_meta"
  }
  SUMMARY "blackbox-exporter is not up"
  DESCRIPTION "blackbox-exporter is not up"

by Michael Stapelberg at 01. January 2016 19:00:00