Planet NoName e.V.

10. May 2016

sECuREs Webseite

Supermicro X11SSZ-QF workstation mainboard


For the last 3 years I’ve used the hardware described in my 2012 article. In order to drive a hi-dpi display, I needed to install an nVidia graphics card, since only the nVidia hardware/software supported multi-tile displays requiring MST (Multiple Stream Transport) such as the Dell UP2414Q. While I’ve switched to a Viewsonic VX2475Smhl-4K in the meantime, I still needed a recent-enough DisplayPort output that could deliver 3840x2160@60Hz. This is not the case for the Intel Core i7-2600K’s integrated GPU, so I needed to stick with the nVidia card.

I then stumbled over a video file which, when played back using the nouveau driver’s VDPAU functionality, would lock up my graphics card entirely, so that only a cold reboot helped. This got me annoyed enough to upgrade my hardware.

Why the Supermicro X11SSZ-QF?

Intel, my standard pick for mainboards with good Linux support, unfortunately stopped producing desktop mainboards. I looked around a bit for Skylake mainboards and realized that the Intel Q170 Express chipset actually supports 2 DisplayPort outputs that each support 3840x2160@60Hz, enabling a multi-monitor hi-dpi display setup. While I don’t currently have multiple monitors and don’t intend to get another monitor in the near future, I thought it’d be nice to have that as a possibility.

Turns out that there are only two mainboards out there which use the Q170 Express chipset and actually expose two DisplayPort outputs: the Fujitsu D3402-B, and the Supermicro X11SSZ-QF. The Fujitsu one doesn’t have an integrated S/PDIF output, which I need to play audio on my Denon AVR-X1100W without a constant noise level. Also, I wasn’t able to find software downloads or even a manual for the board on the Fujitsu website. For Supermicro, you can find the manual and software very easily on their website, and because I bought Supermicro hardware in the past and was rather happy with it, I decided to go with the Supermicro option.

I’ve been using the board for half a year now, without any stability issues.

Mechanics and accessories

The X11SSZ-QF ships with a printed quick reference sheet, an I/O shield and 4 SATA cables. Unfortunately, Supermicro apparently went for the cheapest SATA cables they could find: they lack the clip that keeps them from sliding off the hard disk connector. This is rather disappointing for a mainboard that costs more than 300 CHF. Further, an S/PDIF bracket is not included, so I needed to order one from the USA.

The I/O shield comes with covers over each port, which I assume is because the X11SSZ mainboard family has different ports (one model has more ethernet ports, for example). When removing the covers, push them through from the rear side of the case (if you have already installed the shield). If you push from the other side, a bit of metal will remain in each port.

Due to the positioning of the CPU socket in my Fractal Design Define R3 case, the back of the CPU fan bracket cannot be reached once the mainboard is installed. Hence, you need to install the CPU fan first and the mainboard second. This is doable; you just need to realize it early enough, otherwise you’ll end up installing the mainboard twice.

Integrated GPU not initialized

The integrated GPU is not initialized by default. You need to either install an external graphics card or use IPMI to enter the BIOS and change Advanced → Chipset Configuration → Graphics Configuration → Primary Display to “IGFX”.

For using IPMI, you need to connect the ethernet port IPMI_LAN (top right on the back panel, see the X11SSZ-QF quick reference guide) to a network which has a DHCP server, then connect to the IPMI’s IP address in a browser.

Overeager Fan Control

When I first powered up the mainboard, I was rather confused by the behavior: I got no picture (see above), but LED2 was blinking, meaning “PWR Fail or Fan Fail”. In addition, the computer seemed to turn itself off and on in a loop. After a while, I realized that it’s just the fan control which thinks my slow-spinning Scythe Mugen 3 Rev. B CPU fan is broken because of its low RPM value. The fan control subsequently spins up the fan to maximum speed, realizes the CPU is cool enough, spins down the fan, realizes the fan speed is too low, spins up the fan, etc.

Neither in the BIOS nor in the IPMI web interface did I find any options for the fan thresholds. Luckily, you can actually introspect and configure them using IPMI:

# apt-get install freeipmi-tools
# ipmi-sensors-config --filename=ipmi-sensors.config --checkout

In the human-readable text file ipmi-sensors.config you can now introspect the current configuration. You can see that FAN1 and FAN2 have sections in that file:

Section 607_FAN1
 Enable_All_Event_Messages Yes
 Enable_Scanning_On_This_Sensor Yes
 Enable_Assertion_Event_Lower_Critical_Going_Low Yes
 Enable_Assertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Assertion_Event_Upper_Critical_Going_High Yes
 Enable_Assertion_Event_Upper_Non_Recoverable_Going_High Yes
 Enable_Deassertion_Event_Lower_Critical_Going_Low Yes
 Enable_Deassertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Deassertion_Event_Upper_Critical_Going_High Yes
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
 Lower_Non_Critical_Threshold 700.000000
 Lower_Critical_Threshold 500.000000
 Lower_Non_Recoverable_Threshold 300.000000
 Upper_Non_Critical_Threshold 25300.000000
 Upper_Critical_Threshold 25400.000000
 Upper_Non_Recoverable_Threshold 25500.000000
 Positive_Going_Threshold_Hysteresis 100.000000
 Negative_Going_Threshold_Hysteresis 100.000000
EndSection

When running ipmi-sensors, you can see the current temperatures, voltages and fan readings. In my case, the fan spins at 700 RPM during normal operation, which was exactly the Lower_Non_Critical_Threshold in the default IPMI config. Hence, I modified my config file as illustrated by the following diff:

--- ipmi-sensors.config	2015-11-13 11:53:00.940595043 +0100
+++ ipmi-sensors-fixed.config	2015-11-13 11:54:49.955641295 +0100
@@ -206,11 +206,11 @@
  Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
- Lower_Non_Critical_Threshold 700.000000
- Lower_Critical_Threshold 500.000000
- Lower_Non_Recoverable_Threshold 300.000000
+ Lower_Non_Critical_Threshold 400.000000
+ Lower_Critical_Threshold 200.000000
+ Lower_Non_Recoverable_Threshold 0.000000
  Upper_Non_Critical_Threshold 25300.000000

You can install the new configuration using the --commit flag:

# ipmi-sensors-config --filename=ipmi-sensors-fixed.config --commit

You might need to shut down your computer and disconnect power for this change to take effect, since the BMC is running even when the mainboard is powered off.

S/PDIF output

The S/PDIF pin header on the mainboard just doesn’t work at all. It does not work in Windows 7 (for which the board was made), and it doesn’t work in Linux. Neither the digital nor the analog part of an S/PDIF port works. When introspecting the Intel HDA setup of the board, the S/PDIF output is not even hooked up correctly. Even after fixing that, it doesn’t work.

Of course, I contacted Supermicro support. After making clear to them what my use-case is, they ordered (!) an S/PDIF header and tested the analog part of it. Their technical support claims that their port is working, but despite my asking multiple times, they never replied to my question about which operating system they tested it with.

It’s pretty disappointing to see that the support is unable to help here or at least confirm that it’s broken.

To address the issue, I’ve bought an ASUS Xonar DX sound card. It works out of the box on Linux, and its S/PDIF port works. The S/PDIF port is shared with the Line-in/Mic-in jack, but a suitable adapter is shipped with the card.


Wake-on-LAN and Suspend-to-RAM

I haven’t gotten around to using Wake-on-LAN or Suspend-to-RAM yet. I will amend this section when I get around to it.


Conclusion

It’s clear that this mainboard is not for consumers. This begins with the awkward graphics and fan control setup and culminates in the apparently entirely untested S/PDIF output.

That said, once you get it working, it works reliably, and it seems like the only reasonable option with two onboard DisplayPort outputs.

by Michael Stapelberg at 10. May 2016 18:46:00

19. April 2016



MRMCD Logo 2016

tl;dr: MRMCD 2016 will take place from September 2 to 4 in Darmstadt.

The MRMCD clinic Darmstadt is an IT security clinic providing maximum care. We are looking for practicing participants from all areas of IT and hacker culture, and we would be delighted if you could contribute to our treatment program. Our work focuses, among other things, on network surgery and cryptography, on plastic and reconstructive open-source development, but also on the use of robots in medicine. The clinic incorporates one of the largest and most modern centers in Germany for the treatment of severe and critical medical device hacks, as well as a ward for injuries to the hacker ethic. This makes the MRMCD2016 the ideal environment for your current research projects. Our expert audience is interested in innovative therapeutic approaches from all areas of IT. An initial diagnosis of the conference topics from past years is available in the clinic archive.

Are you ready for new challenges?

Submit your compelling application by 25.07.2016 in our application portal. Even if your preferred specialty is not among our focus areas, we naturally welcome speculative applications. All new staff members will be informed of the allocation of positions on 01.08.2016. If your application is accepted as part of our CFP, you can already look forward to our renowned all-round care in the exclusive senior physicians’ lounge.

The staff will answer questions about applications and about the MRMCD clinic via e-mail.

The MetaRheinMainChaosDays (MRMCD) are an annual IT conference of the Chaos Computer Club (CCC), held for more than ten years, with a slight thematic focus on IT security. Since 2012, the event has been organized in cooperation with the computer science department of the Hochschule Darmstadt (h_da). In addition to a high-quality talk program, the MRMCD offer the opportunity for relaxed conversations with numerous IT experts in an informal atmosphere.

This year’s MRMCD will take place from September 2 to 4. Further information can be found at

by Alexander at 19. April 2016 00:00:00

14. March 2016

mfhs Blog

[c¼h] Testing with rapidcheck and Catch

Last Thursday, I gave another short talk at the Heidelberg Chaostreff NoName e.V. This time I talked about writing tests in C++ using the testing libraries rapidcheck and Catch.

In the talk I presented some ideas on how to incorporate property-based testing into test suites for C++ programs. The idea of property-based testing originates from the Haskell library QuickCheck, which uses properties of the input data and of the output data in order to generate random test cases for a code unit. So instead of testing a piece of code with just a small number of static tests, which are the same for each run of the test suite, we test with randomly seeded data. Additionally, if a test fails, QuickCheck/rapidcheck automatically simplifies the test case in order to find the simplest input which yields a failing test. This of course eases finding the underlying bug massively.

Since this was our first Treff in the new Mathematikon building, which the University opened only recently, we had a few technical difficulties with our setup. As a result, there is unfortunately no recording of my talk available this time. The example code I used during the presentation, however, is available on GitHub. It contains a couple of buggy classes and functions and a Catch-based test program, which can be used to find these bugs.

Link Licence
c14h-rapidcheck-catch repository GNU GPL v3

by mfh at 14. March 2016 00:00:25

06. March 2016

sECuREs Webseite

Docker on Travis for new tools and fast runs

Like many other open source projects, the i3 window manager is using Travis CI for continuous integration (CI). In our specific case, we not only verify that every pull request compiles and the test suite still passes, but we also ensure the code is auto-formatted using clang-format, does not contain detectable spelling errors and does not accidentally use C functions like sprintf() without error checking.

By offering their CI service for free, Travis provides a great service to the open source community, and I’m very thankful for that. Automatically running the test suite for contributions and displaying the results alongside the pull request is a feature that I’ve long wanted, but would have never gotten around to implementing in the home-grown code review system we used before moving to GitHub.

Motivation (more recent build environment)

Nothing is perfect, though, and some aspects of Travis can make it hard to work with. In particular, the build environment they provide is rather old: at the time of writing, the latest you can get is Ubuntu Trusty, which was released almost two years ago. I realize that Ubuntu Trusty is the current Ubuntu Long-Term Support release, but we want to move a bit quicker than being able to depend on new packages roughly once every two years.

For quite a while, we had to make do with that old environment. As a mitigation, in our .travis.yml file, we added the whitelisted ubuntu-toolchain-r-test source for newer versions of clang (notably also clang-format) and GCC. For integrating lintian’s spell checking into our CI infrastructure, we needed a newer lintian version, as the version in Ubuntu Trusty doesn’t have an interface for external scripts to use. Trying to make our .travis.yml file install a newer version of lintian (and only lintian!) was really challenging. To get a rough idea, take a look at our .travis.yml from when we were still stuck with Ubuntu Precise, before we upgraded to Ubuntu Trusty. Cherry-picking a newer lintian version into Trusty would have been even more complicated.

With Travis starting to offer Docker in their build environment, and by looking at Docker’s contribution process, which also makes heavy use of containers, we were able to put together a better solution:


The basic idea is to build a Docker container based on Debian testing and then run all build/test commands inside that container. Our Dockerfile installs compilers, formatters and other development tools first, then installs all build dependencies for i3 based on the debian/control file, so that we don’t need to duplicate build dependencies for Travis and for Debian.
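A sketch of what such a Dockerfile could look like (the package selection and paths are my assumptions, not i3’s actual file):

```dockerfile
FROM debian:testing
# Development tools used by the individual CI steps.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
      build-essential clang-format lintian devscripts equivs
# Derive the build dependencies from debian/control instead of
# duplicating the list for Travis and for Debian.
COPY debian/control /tmp/control
RUN mk-build-deps --install --remove \
    --tool 'apt-get -y --no-install-recommends' /tmp/control
```

mk-build-deps (from the devscripts/equivs packages) builds and installs a throwaway metapackage depending on everything listed in Build-Depends, which is what keeps the two dependency lists from drifting apart.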

This solves the immediate issue nicely, but comes at a significant cost: building a Docker container adds quite a bit of wall clock time to a Travis run, and we want to give our contributors quick feedback. The solution to long build times is caching: we can simply upload the Docker container to the Docker Hub and make subsequent builds use the cached version.

We decided to cache the container for a month, or until inputs to the build environment (currently the Dockerfile and debian/control) change. Technically, this is implemented by a little shell script called (get it? hash!) which prints the SHA-256 hash of the input files. This hash, appended to the current month, is what we use as tag for the Docker container, e.g. 2016-03-3d453fe1.
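The script itself is not reproduced here, but the idea can be sketched in a few lines of shell (the function name is mine, not the project’s):

```shell
# container_tag: print a Docker tag of the form YYYY-MM-<8 hex chars>,
# where the hex part is the SHA-256 over all input files (e.g. the
# Dockerfile and debian/control). Prefixing the current month makes the
# cached container expire monthly even when the inputs never change.
container_tag() {
  hash=$(cat "$@" | sha256sum | cut -c 1-8)
  printf '%s-%s\n' "$(date +%Y-%m)" "$hash"
}
```

Running `container_tag Dockerfile debian/control` in March 2016 would then print something like 2016-03-3d453fe1.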

See our .travis.yml for how to plug it all together.


Results

We’ve been successfully using this setup for a bit over a month now. The advantages over pure Travis are:

  1. Our build environment is more recent, so we do not depend on Travis when we want to adopt tools that are only present in more recent versions of Linux.
  2. CI runs are faster: what used to take about 5 minutes now takes only 1-2 minutes.
  3. As a nice side effect, contributors can now easily run the tests in the same environment that we use on Travis.

There is some potential for even quicker CI runs: currently, all the different steps are run in sequence, but some of them could run in parallel. Unfortunately, Travis currently doesn’t provide a nice way to specify the dependency graph or to expose the different parts of a CI run in the pull request itself.

by Michael Stapelberg at 06. March 2016 19:00:00

01. January 2016

sECuREs Webseite

Prometheus: Using the blackbox exporter

Up until recently, I used to use kanla, a simple alerting program that I wrote 4 years ago. Back then, delivering alerts via XMPP (Jabber) to mobile devices like Android smartphones seemed like the best course of action.

About a year ago, I started using Prometheus for collecting monitoring data and alerting based on that data. See „Monitoring mit Prometheus“, my presentation about the topic at GPN, for more details and my experiences.

Motivation to switch to the Blackbox Exporter

Given that the Prometheus Alertmanager is already configured to deliver alerts to my mobile device, it seemed silly to rely on two entirely different mechanisms. Personally, I’m using Pushover, but Alertmanager integrates with many popular providers, and it’s easy to add another one.

Originally, I considered extending kanla in such a way that it would talk to Alertmanager, but then I realized that the Prometheus Blackbox Exporter is actually a better fit: it’s under active development and any features that are added to it benefit a larger number of people than the small handful of kanla users.

Hence, I switched from having kanla probe my services to having the Blackbox Exporter probe my services. The rest of the article outlines my configuration in the hope that it’s useful for others who are in a similar situation.

I’m assuming that you are already somewhat familiar with Prometheus and just aren’t using the Blackbox Exporter yet.

Blackbox Exporter: HTTP

The first service I wanted to probe is Debian Code Search. The following blackbox.yml configuration file defines a module called “dcs_results” which, when called, downloads the specified URL via an HTTP GET request. The probe is considered failed when the download doesn’t finish within the timeout of 5 seconds, or when the resulting HTTP body does not contain the text “load_font”.

modules:
  dcs_results:
    prober: http
    timeout: 5s
    http:
      fail_if_not_matches_regexp:
        - "load_font"

In my prometheus.conf, this is how I invoke the probe:

- job_name: blackbox_dcs_results
  scrape_interval: 60s
  metrics_path: /probe
  params:
    module: [dcs_results]
    target: ['']
  scheme: http
  static_configs:
  - targets:
    - blackbox-exporter:9115

As you can see, the search query is “i3Font”, and I know that “load_font” is one of the results. In case Debian Code Search does not deliver the expected search results, I know something is seriously broken. To make Prometheus actually generate an alert when this probe fails, we need an alert definition like the following:

ALERT ProbeFailing
  IF probe_success < 1
  FOR 15m
  WITH {}
  SUMMARY "probe {{$labels.job}} failing"
  DESCRIPTION "probe {{$labels.job}} failing"

Blackbox Exporter: IRC

With the TCP probe module’s query/response feature, we can configure a module that verifies an IRC server lets us log in:

    prober: tcp
    timeout: 5s
    tcp:
      query_response:
        - send: "NICK prober"
        - send: "USER prober prober prober :prober"
        - expect: "PING :([^ ]+)"
          send: "PONG ${1}"
        - expect: "^:[^ ]+ 001"
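Hooking this probe into Prometheus works just like the HTTP module above. A sketch, assuming the module is registered as “irc_banner” in blackbox.yml and that the IRC server to check is (both names are my assumptions):

```yaml
- job_name: blackbox_irc
  scrape_interval: 60s
  metrics_path: /probe
  params:
    module: [irc_banner]
    target: ['']
  scheme: http
  static_configs:
  - targets:
    - blackbox-exporter:9115
```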

Blackbox Exporter: Git

The query/response feature can also be used for slightly more complex protocols. To verify a Git server is available, we can use the following configuration:

    prober: tcp
    timeout: 5s
    tcp:
      query_response:
        - send: "002bgit-upload-pack /i3\\x00"
        - expect: "^[0-9a-f]+ HEAD\x00"

Note that the first characters are the ASCII-encoded hex length of the entire line:

$ echo -en '0000git-upload-pack /i3\\x00' | wc -c
$ perl -E 'say sprintf("%04x", 43)'

The corresponding git URL for the example above is git:// You can read more about the git protocol at Documentation/technical/pack-protocol.txt.
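The pkt-line framing can be sketched as a tiny helper (hypothetical, and ignoring the trailing NUL byte, which POSIX shell strings cannot carry):

```shell
# pkt_line: prefix a payload with its length as a 4-digit lowercase hex
# number, as required by the git pack protocol. The length covers the
# payload plus the 4 length digits themselves.
pkt_line() {
  printf '%04x%s' $(( ${#1} + 4 )) "$1"
}
```

For example, `pkt_line 'git-upload-pack /i3'` prints `0017git-upload-pack /i3` (19 payload bytes + 4 = 23 = 0x17).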

Blackbox Exporter: Meta-monitoring

Don’t forget to add an alert that will fire if the blackbox exporter is not available:

ALERT BlackboxExporterDown
  IF count(up{job="blackbox_dcs_results"} == 1) < 1
  FOR 15m
  WITH {}
  SUMMARY "blackbox-exporter is not up"
  DESCRIPTION "blackbox-exporter is not up"

by Michael Stapelberg at 01. January 2016 19:00:00

20. December 2015

sECuREs Webseite

(Not?) Hosting small projects on Container Engine

Note: the postings on this site are my own and do not necessarily represent the postings, strategies or opinions of my employer.


For the last couple of years, the FAQ was running on a dedicated server I rented. I partitioned that server into multiple virtual machines using KVM, and one of these VMs contained the FAQ installation. In that VM, I directly used pip to install the django-based askbot, a Stack Overflow-like questions & answers web application.

Every upgrade of askbot brought with it at least some small annoyances. For example, with the django 1.8 release, one had to change the cache settings (I would have expected compatibility, or at least a suggested/automated config file update). A new release of a library dependency broke the askbot installation. The askbot 0.9.0 release was not installable on Debian-based systems. In conclusion, for every upgrade you needed to plan a couple of hours for identifying and possibly fixing numerous small issues like these.

Once Docker was released, I started using askbot in Docker containers. I’ll talk a bit more about the advantages in the next section.

With the software upgrade burden largely mitigated by using Docker, I’ve had some time to consider bigger issues, namely disaster recovery and failure tolerance. The story for disaster recovery up to that point was daily off-site backups of the underlying PostgreSQL database. Because askbot was packaged in a Docker container, it became feasible to quickly get exactly the same version back up and running on a new server. But what about failure tolerance? If the server which runs the askbot Docker container suddenly dies, I would need to manually bring up the replacement instance from the most recent backup, and in the timespan between the hardware failure and my intervention, the FAQ would be unreachable.

The desire to make hardware failures a non-event for our users led me to evaluate Google Container Engine (abbreviated GKE) for hosting the FAQ. The rest of the article walks through the motivation behind each layer of technology that’s used when hosting on GKE, how exactly one would go about it, how much one needs to pay for such a setup, and concludes with my overall thoughts on the experience.

Motivation behind each layer

Google Container Engine is a hosted version of Kubernetes, which in turn schedules Docker containers on servers and ensures that they are staying up. As an example, you can express “I always want to have one instance of the prosody/prosody Docker container running, with the Persistent Disk volume prosody-disk mounted at /var/lib/prosody” (prosody is an XMPP server). Or, you could make it 50 instances, just by changing a single number in your configuration file.

So, let’s dissect the various layers of technology that we’re using when we run containers on GKE and see what each of them provides, from the lowest layer upwards.


Docker

Docker combines two powerful aspects:

  1. Docker allows us to package applications with a common interface. No matter which application I want to run on my server, all I need is to tell Docker the container name (e.g. prom/prometheus) and then configure a subset of volumes, ports, environment variables and links between the different containers.
  2. Docker containers are self-contained and can (I think they should!) be treated as immutable snapshots of an application.

This results in a couple of nice properties:

  • Moving applications between servers: Much like live-migration of Virtual Machines, it becomes really easy to move an application from one server to another. This covers both: regular server migrations and emergency procedures.
  • Being able to easily test a new version and revert, if necessary: Much like filesystem snapshots, you can easily switch back and forth between different versions of the same software, just by telling Docker to start e.g. prom/node-exporter:0.10.0 instead of prom/node-exporter:0.9.0. Notably, if you treat containers themselves as read-only and use volumes for storage, you might be able to revert to an older version without having to throw away the data that you have accumulated since you upgraded (provided there were no breaking changes in the data structure).
  • Upstream can provide official Docker images instead of relying on Linux distributions to package their software. Notably, this does away with the notion that Linux distributions provide value by integrating applications into their own configuration system or structure. Instead, software distribution gets unified across Linux distributions. This property also pushes out the responsibility for security updates from the Linux distribution to the application provider, which might be good or bad, depending on the specific situation and point of view.


Kubernetes

Kubernetes is the layer which makes multiple servers behave like a single, big server. It abstracts individual servers away:

  • Machine failures are no longer a problem: When a server becomes unavailable for whichever reason, the containers which were running on it will be brought up on a different server. Note that this implies some sort of machine-independent storage solution, like Persistent Disk, and also multiple failure domains (e.g. multiple servers) to begin with.
  • Updates of the underlying servers get easier, because Kubernetes takes care of re-scheduling the containers elsewhere.
  • Scaling out a service becomes easier: you adjust the number of replicas, and Kubernetes takes care of bringing up that number of Docker containers.
  • Configuration gets a bit easier: Kubernetes has a declarative configuration language where you express your intent, and Kubernetes will make it happen. In comparison to running Docker containers with “docker run” from a systemd service file, this is an improvement because the number of edge-cases in reliably running a Docker container is fairly high.

Google Container Engine

While one could rent some dedicated servers and run Kubernetes on them, Google Container Engine offers that as a service. GKE offers some nice improvements over a self-hosted Kubernetes setup:

  • It’s a hosted environment, i.e. you don’t need to do updates and testing yourself, and you can escalate any problems.
  • Persistent Disk: your data will be stored on a distributed file system and you will not have to deal with dying disks yourself.
  • Persistent Disk snapshots are globally replicated (!), providing an additional level of data protection in case there is a catastrophic failure in an entire datacenter or region.
  • Logs are centrally collected, so you don’t need to set up and maintain your own central syslog installation.
  • Automatic live-migration: The underlying VMs (on Google Compute Engine) are automatically live-migrated before planned maintenance, so you should not see downtime unless unexpected events occur. This is not yet state of the art at every hosting provider, which is why I mention it.

The common theme in all of these properties is that while you could do each of these yourself, it would be very expensive, both in terms of actual money for the hardware and underlying services and in terms of your time. When using Kubernetes as a small-scale user, I think going with a hosted service such as GKE makes a lot of sense.

Getting askbot up and running

Note that I expect you to have skimmed over the official Container Engine documentation, which also provides walkthroughs of how to set up e.g. WordPress.

I’m going to illustrate running a Docker container by just demonstrating the nginx-related part. It covers the most interesting aspects of how Kubernetes works, and packaging askbot is out of scope for this article. Suffice it to say that you’ll need containers for nginx, askbot, memcached and postgres.

Let’s start with the Replication Controller for nginx, which is a logical unit that creates new Pods (Docker containers) whenever necessary. For example, if the server which holds the nginx Pod goes down, the Replication Controller will create a new one on a different server. I defined the Replication Controller in nginx-rc.yaml:

# vim:ts=2:sw=2:et
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    env: prod
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      containers:
      - name: nginx
        # Use nginx:1 in the hope that nginx will not break their
        # configuration file format without bumping the
        # major version number.
        image: nginx:1
        # Always do a docker pull when starting this pod, as the nginx:1
        # tag gets updated whenever there is a new nginx version.
        imagePullPolicy: Always
        ports:
        - name: nginx-http
          containerPort: 80
        - name: nginx-https
          containerPort: 443
        volumeMounts:
        - name: nginx-config-storage
          mountPath: /etc/nginx/conf.d
          readOnly: true
        - name: nginx-ssl-storage
          mountPath: /etc/nginx/i3faq-ssl
          readOnly: true
      volumes:
      - name: nginx-config-storage
        secret:
          secretName: nginx-config
      - name: nginx-ssl-storage
        secret:
          secretName: nginx-ssl

You can see that I’m referring to two volumes which are called Secrets. This is because static read-only files are not yet supported by Kubernetes. So, in order to bring the configuration and SSL certificates to the docker container, I’ve chosen to create a Secret for each of them. An alternative would be to create my own Docker container based on the official nginx container, and then add my configuration in there. I dislike that approach because it signs me up for additional maintenance: with the Secret injection method, I’ll just use the official nginx container, and nginx upstream will take care of version updates and security updates. For creating the Secret files, I’ve created a small Makefile:

all: nginx-config-secret.yaml nginx-ssl-secret.yaml

nginx-config-secret.yaml: static/
	./ nginx-config >$@
	echo " $(shell base64 -w0 static/" >> $@

nginx-ssl-secret.yaml: static/ static/ static/dhparams.pem
	./ nginx-ssl > $@
	echo " $(shell base64 -w0 static/" >> $@
	echo " $(shell base64 -w0 static/" >> $@
	echo "  dhparams.pem: $(shell base64 -w0 static/dhparams.pem)" >> $@

The script is just a simple file template:

#!/bin/sh
# <name>
cat <<EOT
# vim:ts=2:sw=2:et:filetype=conf
apiVersion: v1
kind: Secret
metadata:
  name: $1
type: Opaque
data:
  # TODO: once either of the following two issues is fixed,
  # migrate away from secrets for configs:
  # -
  # -
EOT

Finally, we will also need a Service definition so that incoming connections can be routed to the Pod, regardless of where it lives. This will be nginx-svc.yaml:

# vim:ts=2:sw=2:et
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx
  type: LoadBalancer

I then committed these files into the private git repository that GKE provides and used kubectl to bring it all up:

$ make
$ kubectl create -f nginx-config-secret.yaml
$ kubectl create -f nginx-ssl-secret.yaml
$ kubectl create -f nginx-rc.yaml
$ kubectl create -f nginx-svc.yaml

Because we’ve specified type: LoadBalancer in the Service definition, a static IP address will be allocated and can be obtained by using kubectl describe svc nginx. You should be able to access the nginx server on that IP address now.


Cost

I like to split the cost of running askbot on GKE into four chunks: the underlying VM instances (biggest chunk), the network load balancing (surprisingly big chunk), storage and all the rest, like network traffic.

Cost: VM instances

The cheapest GKE cluster you can technically run consists of three f1-micro nodes. If you try to start one with fewer nodes, you’ll get an error message: “ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=One f1-micro instance is not sufficient for a cluster with logging and/or monitoring enabled. Please use a larger machine-type, at least three f1-micro instances, or disable both logging and monitoring.”

However, three f1-micro nodes will not be enough to successfully run a web application such as askbot. The f1-micro instances don’t have a reserved CPU core, so you are using left-over capacity, and sometimes that capacity might not be enough. I have seen cases where askbot would not even start up within 10 minutes on an f1-micro instance. I definitely recommend you skip f1-micro instances and directly go with g1-small instances, the next bigger instance type.

For the g1-small instances, you can go as low as two machines. Also be specific about how much disk the machines need, otherwise you will end up with the default disk size of 100 GB, which might be unnecessary for your use case. I’ve used this command to create my cluster:

gcloud container clusters create i3-faq \
  --num-nodes=2 \
  --machine-type=g1-small \

At 0.021 USD/hour for a g1-small instance that you run continuously, the VM instances will add up to about 30 USD/month.

Cost: Network load balancing

As explained before, the only way to get a static IP address for a service to which you can point your DNS records is to use Network Load Balancing. At 0.025 USD/hour, this adds up to about 18 USD/month.

Cost: Storage

While Persistent Disk comes in at 0.04 USD/GB/month, consider that a certain minimum size of a Persistent Disk volume is necessary in order to get a certain performance out of it: the Persistent Disk docs explain how a 250 GB volume size might be required to match the performance of a typical 7200 RPM SATA drive if you’re doing small random reads.

I ended up paying for 196 Gibibyte-months, which adds up to 7.88 USD. This included daily snapshots, of which I created one per day and always kept five around.

Cost: Conclusion

The rest, mostly network traffic, added up to 1 USD, but keep in mind that this instance was just set up and did not receive real user traffic.

In total, I was paying 57 USD/month.
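
As a sanity check, the monthly total can roughly be reproduced from the hourly rates quoted above (a sketch assuming a ~30.5-day month; the small differences to the actual bill come from billing granularity):

```python
HOURS_PER_MONTH = 24 * 30.5  # ~732 hours

vm_cost = 2 * 0.021 * HOURS_PER_MONTH  # two g1-small instances
lb_cost = 0.025 * HOURS_PER_MONTH      # network load balancing
storage_cost = 196 * 0.04              # 196 GiB-months of Persistent Disk
rest = 1.0                             # network traffic etc.

total = vm_cost + lb_cost + storage_cost + rest
# Roughly matches the 57 USD/month on the actual bill.
print(f"VMs: {vm_cost:.2f} LB: {lb_cost:.2f} disk: {storage_cost:.2f} total: {total:.2f}")
```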

Final thoughts

Overall, I found the whole cloud experience pretty polished; there were no big rough edges. Certainly, Kubernetes is usable right now (at least in its packaged form in Google Container Engine), and I have no doubts it will get better over time.

The features which GKE offers are exactly what I’m looking for, and Google offers them for a fraction of the price that it would cost me to build, and most importantly run, them myself. This applies to Persistent Disk, globally replicated snapshots, centralized logs, automatic live migration and more.

At the same time, I found it slightly too expensive for what is purely a hobby project. Especially the network load balancing pushed the bill over the threshold of what I find acceptable for hosting a single application. If GKE becomes usable without the network load balancing, or prices drop, I’d whole-heartedly recommend it for hobby projects.

by Michael Stapelberg at 20. December 2015 22:35:00

Configuring locales on Linux

While modern desktop environments that are used on Linux set up the locale with support for UTF-8, users who prefer not to run a desktop environment or users who use SSH to work on remote computers occasionally face trouble setting up their locale correctly.

On Linux, the locale is defined via the environment variables LANG and a bunch of environment variables starting with LC_, e.g. LC_MESSAGES.

Your system’s available locales

With locale -a, you can get a list of available locales, e.g.:

$ locale -a

The format of these values is [language[_territory][.codeset][@modifier]], see wikipedia:Locale.

You can configure the locales that should be generated on your system in /etc/locale.gen. On Debian, sudo dpkg-reconfigure locales brings up a front-end for that configuration file which automatically runs locale-gen(8) after you’ve made changes.

The environment variables

  1. LC_ALL overrides all variables starting with LC_ and LANG.
  2. LANG is the fallback when more specific LC_ environment variables are not set.
  3. The individual LC_ variables are documented in the Single UNIX® Specification.

Based on the above, my advice is to never set LC_ALL, set LANG to the locale you want to use and possibly override specific aspects using the relevant LC_ variables.
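
The precedence rules above boil down to: LC_ALL beats everything, a specific LC_* variable beats LANG, and LANG is the last resort before the "C" default. A small sketch of that lookup logic (the function name is made up for illustration):

```python
def effective_locale(category, env):
    """Resolve a locale category (e.g. 'LC_TIME') the way libc does:
    LC_ALL overrides everything, then the specific LC_* variable,
    then LANG, then the 'C' default."""
    return env.get("LC_ALL") or env.get(category) or env.get("LANG") or "C"

env = {"LANG": "de_CH.UTF-8", "LC_TIME": "en_DK.UTF-8"}
print(effective_locale("LC_TIME", env))      # en_DK.UTF-8 (specific override wins)
print(effective_locale("LC_MESSAGES", env))  # de_CH.UTF-8 (falls back to LANG)
```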

As an example, my personally preferred setup looks like this:

unset LC_ALL
export LANG=de_CH.UTF-8
export LC_MESSAGES=C
export LC_TIME=en_DK.UTF-8

This slightly peculiar setup sets LANG=de_CH.UTF-8 so that the default value corresponds to Switzerland, where I live.

Then, it specifies via LC_MESSAGES=C that I prefer program output to not be translated. This corresponds to programs having English output in all cases that are relevant to me, but strictly speaking you could come across programs whose untranslated language isn’t English, so maybe you’d prefer LC_MESSAGES=en_US.UTF-8.

Finally, via LC_TIME=en_DK.UTF-8, I configure date/time output to use ISO 8601-style formatting, i.e. YYYY-mm-dd HH:MM:SS and a 24-hour clock.
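
The date format that the en_DK LC_TIME choice selects corresponds to this strftime pattern (a sketch that hard-codes the pattern rather than going through the locale system):

```python
from datetime import datetime

# en_DK.UTF-8 effectively selects an ISO 8601-style date and a 24-hour clock.
ISO_LIKE = "%Y-%m-%d %H:%M:%S"

ts = datetime(2015, 12, 20, 16, 22, 9)
print(ts.strftime(ISO_LIKE))  # 2015-12-20 16:22:09
```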

Displaying the current locale setup

By running locale without any arguments, you can see the currently effective locale configuration:

$ locale

Where to set the environment variables?

Unfortunately, there is no single configuration file that allows you to set environment variables. Instead, each shell reads a slightly different set of configuration files, see wikipedia:Unix_shell#Configuration_files for an overview. If you’re unsure which shell you are using, try using readlink /proc/$$/exe.

Configuring the environment variables in the shell covers logins on the text console and via SSH, but you’ll still need to set the environment variables for graphical sessions. If you’re using a desktop environment such as GNOME, the desktop environment will configure the locale for you. If you’re using a window manager, you should be using an Xsession script (typically found in ~/.xsession or ~/.xinitrc).

To keep the configuration centralized, I recommend you create a file that you can include from both your shell config and your Xsession:

cat > ~/.my_locale_env <<'EOT'
unset LC_ALL
export LANG=de_CH.UTF-8
export LC_MESSAGES=C
export LC_TIME=en_DK.UTF-8
EOT

echo 'source ~/.my_locale_env' >> ~/.zshrc
sed -i '2isource ~/.my_locale_env' ~/.xsession

Remember to make these settings both on your local machines and on the machines you log into remotely.

Non-interactive SSH sessions

Notably, the above setup only covers interactive sessions. When you run e.g. ssh server ls /tmp, ssh will actually use a non-interactive non-login shell. For most shells, this means that the usual configuration files are not read.

In order for your locale setup to apply to non-interactive SSH commands as well, ensure that your SSH client is configured with SendEnv LANG LC_* to send the environment variables to the SSH server when connecting. On the server, you’ll need to have AcceptEnv LANG LC_* configured. Recent versions of OpenSSH include these settings by default in /etc/ssh/ssh_config and /etc/ssh/sshd_config, respectively. If that’s not the case on your machine, use echo "SendEnv LANG LC_*" >> ~/.ssh/config.

To verify which variables are getting sent, run SSH with the -v flag and look for the line “debug1: Sending environment.”:

$ ssh localhost env
debug1: Sending environment.
debug1: Sending env LC_TIME = en_DK.UTF-8
debug1: Sending env LANG = de_CH.UTF-8
debug1: Sending env LC_MESSAGES = C
debug1: Sending command: env

Debugging locale issues

You can introspect the locale-related environment variables of any process by inspecting the /proc/${PID}/environ file (where ${PID} stands for the process id of the process). As an example, this is how you verify your window manager is using the expected configuration, provided you use i3:

$ tr '\0' '\n' < /proc/$(pidof i3)/environ | grep -e '^\(LANG\|LC_\)'

In order for Unicode text input to work, your terminal emulator (e.g. urxvt) and the program you are using inside it (e.g. your shell, like zsh, or a terminal multiplexer like screen, or a chat program like irssi, etc.) both should use a locale whose codeset is UTF-8.

A good end-to-end test could be to run the following perl command:

$ perl -MPOSIX -Mlocale -e 'print strftime("%c", localtime) . "\n"'
2015-12-20T16:22:09 CET

In case your locales are misconfigured, perl will complain loudly:

$ LC_TIME=en_AU.UTF-8 perl -MPOSIX -Mlocale -e 'print strftime("%c", localtime) . "\n"'
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_TIME = "en_AU.UTF-8",
	LANG = "de_CH.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("de_CH.UTF-8").
Son 20 Dez 2015 16:22:58 CET

by Michael Stapelberg at 20. December 2015 16:30:00

02. December 2015

sECuREs Webseite

Prometheus Alertmanager meta-monitoring

I’ve been happily using Prometheus for monitoring and alerting for about a year.

Regardless of the monitoring system, one problem I was never sure how to solve well is meta-monitoring: if you have a monitoring system, how do you know that the monitoring system itself is running? You need another level of monitoring/alerting (hence “meta-monitoring”).

Recently, I realized that I could use Gmail for meta-monitoring: Google Apps Script allows users to run JavaScript code periodically that has access to Gmail and other Google apps. That way, I can have a cronjob which looks for emails from my monitoring/alerting infrastructure, and if there are none for 2 days, I get an alert email from that script.

That’s a rather simple way of having an entirely different layer of monitoring code, so that the two monitoring systems don’t suffer from a common bug. Further, the code is running on Google servers, so hardware failures of my monitoring system don’t affect it.

The rest of this article walks you through the setup, assuming you’re already using Prometheus, Alertmanager and Gmail.

Installing the meta-monitoring Google Apps Script

See the “Your first script” instructions for how to create a new Google Apps Script file. Then, use the following code, of course replacing the email addresses of your Alertmanager instance and your own email address:

// vim:ts=2:sw=2:et:ft=javascript
// Licensed under the Apache 2 license.
// © 2015 Google Inc.

// Runs every day between 07:00 and 08:00 in the local time zone.
function checkAlertmanager() {
  // Find all matching email threads within the last 2 days.
  // Should result in 2 threads, unless something went wrong.
  var search_atoms = [
    // Replace with the sender address of your Alertmanager instance.
    'from:alertmanager@example.com',
    'subject:daily alert test',
    'newer_than:2d',
  ];
  var threads = GmailApp.search(search_atoms.join(' '));
  if (threads.length === 0) {
    // Replace with your own email address.
    GmailApp.sendEmail(
      'you@example.com',
      'ALERT: alertmanager test mail is missing for 2d',
      'Subject says it all');
  }
}
In the menu, select “Resources → Current project’s triggers”. Click “Add a new trigger”, select “Time-driven”, “Day timer” and set the time to “7am to 8am”. This will make the script run every day between 07:00 and 08:00. The exact time doesn’t really matter, but you need to specify something. I went for the 07:00-08:00 timespan because that’s shortly before I typically get up, so I’ll be presented with the freshest results right when I get up.

You can now either wait a day for the trigger to fire, or you can select the checkAlertmanager function in the “Run” menu to run it right away. You should end up with an email in your inbox, notifying you that the daily alert test is missing, which is expected since we did not configure it yet :).

Configuring a daily test alert in Prometheus

Create a file called dailytest.rules with the following content:

ALERT DailyTest
  IF vector(1) > 0
  FOR 1m
  LABELS {
    job = "dailytest",
  }
  ANNOTATIONS {
    summary = "daily alert test",
    description = "daily alert test",
  }

Then, include it in your Prometheus config’s rules section. After restarting Prometheus or sending it a SIGHUP signal, you should see the new alert on the /alerts status page:

prometheus daily alert

Configuring Alertmanager

In your Alertmanager configuration, you’ll need to specify where that alert should be delivered to and how often it should repeat. I suggest you add a notification_config that you’ll use specifically for the daily alert test and nothing else, so that you never accidentally change something:

route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 1h
  receiver: team-X-pager

  routes:
  - match:
      job: dailytest
    receiver: dailytest
    repeat_interval: 1d

receivers:
- name: 'dailytest'
  email_configs:
  - to: ''

Send Alertmanager a SIGHUP signal to make it reload its configuration file. After Prometheus has been running for a minute, you should see the following alert on your Alertmanager’s /alerts status page:

prometheus alertmanager alert

Adding a Gmail filter to hide daily test alerts

Finally, once you verified everything is working, add a filter so that the daily test alerts don’t clutter your Gmail inbox: put “from:( subject:(DailyTest)” into the search box, click the drop-down icon, click “Create filter with this search”, select “Skip the Inbox”.

gmail filter screenshot

by Michael Stapelberg at 02. December 2015 09:50:00

11. November 2015


Stick-Seminar #2

On Saturday, 05 December 2015, from 10:00 to 23:00, there will be a seminar in which participants learn to create their own embroidery designs and to stitch them onto (almost) any textiles.


The seminar is free apart from consumables: 20 cents per 1000 stitches and, if desired, T-shirts (5€) / hoodies (30€) from RZL stock; of course you can also bring your own textiles to embroider. Anyone who registers before 01.12. gets cake and their designs stitched first.

It is not only the usual suspects (sweaters, T-shirts, any other clothing) that can be embroidered; I have also stitched designs onto polyester microfiber and sewn pillowcases from it. I can help with that as well if anyone wants to do it.

No prior knowledge is necessary, but it makes sense to install the current version of Inkscape before the workshop and to work through the first two tutorials; there is also a guide on the topic in the RZL wiki, but all of that is optional.

Next Tuesday (17.11.) at 20:00 I will order fabric from and from this shop, since unfortunately neither of the two has all the colors I would like. If you want to add something to the order, send me an e-mail. For a pillowcase you additionally need a zipper; I can get those, they cost just under 6€.

by Alexander at 11. November 2015 00:00:00

05. October 2015


Creating embroidery designs yourself

Template and result

The embroidery software used at the RZL allows importing raster graphics; there has also been a tutorial for this since April 2012.

This method has pros and cons: it is comparatively quick and usually produces nice results, but it allows little control over the outcome, and for perfect embroidery files quite a lot of touching up is needed.

If you have the template as an SVG, you can exploit the advantages of the format:

  • The outlines are known exactly and do not have to be vectorized automatically.
  • The order of the objects is known exactly, so that outlines are automatically stitched after area fills and overlapping parts are stitched in the correct order.
  • In the vector graphic you can partly specify in advance which parts should be realized with which stitch type.

That is why I have been using this method exclusively for some time. For my last embroidery project I recorded every step precisely and wrote a detailed tutorial.

by Alexander at 05. October 2015 00:00:00

02. October 2015

mfhs Blog

51st Symposium on Theoretical Chemistry (Potsdam)

Last week I went to the annual Symposium on Theoretical Chemistry. This year, the 51st STC took place from the 20th till the 24th of September in Potsdam.

Unlike last year, where the conference business was all new to me, this year it was more like returning to a place with a familiar atmosphere and many familiar faces. Whilst the conference surely was again a great opportunity to present my work and to learn about theoretical chemistry in the lectures, this time it had the additional aspect of catching up with the people I already met last year. Most of all I enjoyed the poster sessions this year, probably because of the many discussions I had with other researchers and PhD students about their recent advances.

Concerning our project, we were still not able to obtain noteworthy results with our current implementation of finite-element Hartree-Fock (see poster attached below). This is mainly due to the extreme memory requirements that the calculation of the Hartree-Fock exchange matrix imposes on the program: right now we store this whole beast in memory, and hence the amount of memory we require scales quadratically with the number of finite elements. In other words, even for extremely small test cases we need gigabytes of memory to achieve only very inadequate accuracy.

Together with James Avery, who visited us for a few days in July, we recently managed to come up with a new scheme for implementing Hartree-Fock exchange within the finite-element method. This looks very promising, since it decreases both the memory and the computational cost to linear scaling. Right now we are struggling with the implementation, however. In November I will visit James in Copenhagen for 3 weeks. If all goes well, we will hopefully overcome these problems and work out a good structure for an FE-HF program that incorporates everything we have learned so far.

Link Licence
Poster P42 STC 2015 Creative Commons License

by mfh at 02. October 2015 00:00:13


30. September 2015

mfhs Blog

[c¼h] Einführung in die Elektronenstrukturtheorie

The week before my bash scripting course I gave another short talk at the weekly meeting of the Heidelberg Chaostreff NoName e.V. Unlike last time, the talk was not concerned with a traditional "hacker" topic; rather, I tried to give a brief introduction to my own research field.

Of course, 15 to 20 minutes are not enough to go deep into finite-element Hartree-Fock, so I ended up giving a small introduction to electronic structure theory instead. Questions that were addressed:

  • Why is electronic structure theory useful? What kind of information can I retrieve using it?
  • What are the fundamental physical and chemical concepts that lead to electronic structure theory?
  • What kind of approximations and physical ideas do we need in order to arrive at a method that is used in practice, e.g. the so-called Hartree-Fock method?

The talk was held in German and a recording is available (see below). Please be aware, however, that not everything I mention is, scientifically speaking, correct.

by mfh at 30. September 2015 00:00:33

24. September 2015


Agenda Diplom 2015 at the RaumZeitLabor

Junior lab members learn to solder

Learning to solder

As in previous years, we at the RaumZeitLabor once again took part in the City of Mannheim's Agenda Diplom. On five dates in total, 12 children each visited us to build a treasure chest. The special twist: the chest can only be opened by knocking the right code on it.


The circuit was built around an Arduino Pro Mini. A piezo transducer was used to pick up the knock signals, and a micro servo for the locking mechanism. The wooden box was originally supposed to be produced on our laser cutter “FazZz0r”. Due to technical difficulties with the laser tube, however, the machine could not be operated, and friendly makerspaces in Stuttgart, Karlsruhe and Wuppertal kindly helped us out.

by Ingo at 24. September 2015 00:00:00

19. September 2015


Hearthstone – Barstone#2: The Grand Tourney

On 26.9.2015, from 18:00 on, the RaumZeitLabor once again turns into the tavern »Zum güldenen Einhorn«! The release of the second Hearthstone expansion, »The Grand Tournament«, is the occasion to determine the best Hearthstone player in glorious competition.

2. BarStone

Pull up a chair, prepare your deck, and let's play a round of Hearthstone. We invite all players, whether Hearthstone newcomers or tournament veterans, to spend a relaxed evening with us around the digital campfire. Meet new people, swap notes with other players, and pick up a few tricks! Plenty of Hearthstone will be played outside the tournament as well.

If you want to take part, download the free Hearthstone app and be at the RaumZeitLabor by 18:00. Registering in advance via Facebook or by e-mail to makes planning easier for us.

There is still a spot free for you at our fire!

by Cheatha at 19. September 2015 00:00:00

12. September 2015


Basic Linux commands

At my university, I am responsible for the programming pre-course for absolute beginners organized by the student council. In this context, to have a place where it is all laid out clearly, I wanted to list the most important Linux commands that a beginner

12. September 2015 00:00:00

08. September 2015


MRMCD 2015: Basketflaschen

Dear sports friends,

MRMCD 2015 ESA pictogram

True to our motto “Faster. Higher. Further.”, we are very proud to present our newest piece of sports equipment, which will get you faster, higher and further: the bottle basket 2.1 (Next Generation Basketbottle):

MRMCD 2015 basketball hoop with bottle collector

Gone are the days of conference tables full of empty single-use deposit bottles: from now on, every bottle can be conveniently thrown into the corresponding container from your seat. We are still waiting for TÜV Süd to approve the construction; until then, please refrain from dunking attempts, as the screws might tear out.

With sporting regards,

the competition management of the MRMCD15

by Alexander at 08. September 2015 00:00:00

29. August 2015


Seminar: embroidering and sewing your own bed linen

On Saturday, 10 October 2015, from 10:00 to 23:00, there will be a seminar in which participants learn to create their own embroidery designs, stitch them onto polyester microfiber, and then sew pillowcases or duvet covers with a concealed zipper from the result:

Concealed zipper

If you want to take part, you must register with me by e-mail by 26.09.2015, 23:59. The number of participants is limited; if there are more registrations than places, lots will be drawn.

For a normal pillowcase you need 2 m of fabric (14€) and a zipper with slider (5€). For a duvet cover you need 5 m of fabric (35€) and a long zipper with slider (6€). For appliqués you additionally need 1 m of fabric per color (7€); remnants can be used if several participants need the same colors.

No prior knowledge is necessary, but it makes sense to install the current version of Inkscape before the workshop and to work through the first two tutorials; there is also a guide on the topic in the RZL wiki, but all of that is optional. If we don't finish on Saturday, the workshop will continue on Sunday from 12:00.

The registration must include the following details:

  • Type (pillowcase or duvet cover).
  • Dimensions.
  • Desired base color and, if applicable, appliqué colors. You can also get fabric yourself if you prefer. The microfiber fabric has a silky, smooth feel and is pleasantly cool on the skin.
  • Desired zipper color (may not be available, in which case I'll pick a similar one; you won't see it in the end anyway).
  • The desired embroidery design, so that I can look at it beforehand and check whether it is feasible at all.

Here are two examples in which I combined glow-in-the-dark and normal thread:

Glow-in-the-dark embroidery design that closes its eyes in the dark

In the second example, I realized the body and mane fill as appliqué:

Glow-in-the-dark embroidery design with appliqués

by Alexander at 29. August 2015 00:00:00

25. August 2015


MRMCD 2015: Keynote by Dr. Paolo Ferri

Dear sports friends,

MRMCD 2015 ESA pictogram

True to our motto “Faster. Higher. Further.”, we are very proud to present the best speaker, who will get you faster, higher and further: Dr. Paolo Ferri, Head of Mission Operations at ESA (head of all unmanned ESA missions). He will report on the end of the Rosetta mission, carried out exceedingly high up, on the fast comet Chury, and on the fate of the far-travelled endurance athlete Philae. He will also tell you how you can do rocket science yourselves, e.g. at ESOC and ESA. He will surely also have time for your questions!

The event takes place: Friday 04.09., 17:00

Meanwhile, the rest of the competition calendar (Fahrplan) has also been published. What awaits you at the MRMCD this year can be found at:

With sporting regards,

the competition management of the MRMCD15

by Alexander at 25. August 2015 00:00:00

16. August 2015

mfhs Blog

Advanced bash scripting block course

Currently I am busy preparing the lecture notes and exercises for the advanced bash scripting course that I will teach for PhD students of my graduate school. The course is a block course running from 24 till 28 August, and there are still some places available. So in case you are interested in learning how to write shell scripts in a structured way, you will find more information here. You can obtain the course material either from this page or from my github account mfherbst.

by mfh at 16. August 2015 00:00:52

14. August 2015


MRMCD 2015: a small preview of the Fahrplan

Dear sports friends,

MRMCD 2015 poster

Since advance ticket sales end soon, we have put together a list of particularly interesting talks to give you a small preview of what to expect at the MRMCD15. There will be a talk about what you can do with the data that Deutsche Bahn doesn't really want to give us, and about how to have fun with undocumented public transit APIs. Once you have siphoned off all that data, you have to do something with it; for that, there is a talk presenting the parallel processing of large datasets with Apache Spark. Of course there are again many talks about surveillance by companies and intelligence agencies and its implications for your own data. Beyond that, there are talks ranging from bioinformatics to trips to Mars. Freifunk and networking enthusiasts will also get their money's worth at the MRMCD, be it in talks on topics such as VPNs and wireless physical layer security, or at the Freifunk community meetup. The diverse talk program is accompanied by several workshops on working with IPv6, Ghostscript, Markdown and ImageMagick. And that is far from all: fitting our sporting motto, there will be exciting and fun competitions. The sporting highlight is Team MRMCD's participation in a triathlon in Darmstadt on Sunday.

If you don't have a ticket yet, you have until 16 August to get one. There will be box-office tickets, but the box office will probably not have any of our (as always magnificent) mugs or gadgets.

This year the MRMCD take place from 4 to 6 September at Hochschule Darmstadt. As every year, they offer a broad program of talks and a relaxed atmosphere for exchanging ideas with interesting people. All further info is available at

With sporting regards,

the competition management of the MRMCD15

by Alexander at 14. August 2015 00:00:00

09. August 2015


Report from the embroidery weekend

On 25./26.07., the RZL once again hosted an embroidery weekend, this time with visitors from Entropia.

Nervengift made full use of the maximum width of our largest hoop and embroidered a 495 mm wide rainbow-colored unicorn onto his lab coat:


Habo was there with his family again and worked with Jens on the cross-stitch functionality of Stickes. Stickes is a piece of software that can generate embroidery designs from various fractals. Via an input mask for L-systems, you can now embroider almost any fractal that can be expressed as a Lindenmayer system.

The dragon curve, which we already embroidered at the first embroidery weekend, was also stitched out twice:


Jiska embroidered the logo of this year's CC camp onto a polo shirt:


Marsi embroidered with silver glitter thread on microfiber:


And I embroidered my newest Luna design onto fabric and sewed a pillowcase from it.

In doing so, I combined glow-in-the-dark and silk thread so that Luna closes her eyes as soon as it gets dark:


On Sunday evening, after most people had left, Jens and I embroidered a hoodie with a large version of the camp logo (98k stitches).

I also continued building the thread stand for small cones a bit:

Thread stand for small cones

It is now finished and hangs on the wall in the FabLab.

In total we made 444,443 stitches and had lots of fun doing it. Many thanks also to the visitors, who brought great ideas along and with whom we could swap embroidery techniques.

We will repeat the event next year at the latest; until then, we wish our guests few thread breaks and always enough bobbin thread in the machine.

Your StickZeitLabor

by Alexander at 09. August 2015 00:00:00

07. August 2015

mfhs Blog

Tor and Tor Browser update scripts

A while ago I set up a github account in order to contribute a bugfix to the deal.II finite element software I am using for my PhD project. Since then not much had happened to the account, until I decided to add the repository update-scripts, in which I plan to maintain the scripts I use to update some third-party software on my Linux machines.

Currently not very many applications are supported: pretty much only Tor and the Tor Browser, both applications from the Tor Project. Since both are updated fairly often, the packages of my distribution (Debian) are not recent enough. The Tor Browser in particular is installed on pretty much all my machines: it is really easy to use and perfect if you just need to do some quick checks on the internet. So I decided to develop scripts that automatically download, cryptographically verify and install the current Tor Browser into the user's HOME directory. If you are interested in using or contributing to this project, feel free to download it below or visit the project page on github.

Link Licence
update-scripts repository GNU GPL v3

by mfh at 07. August 2015 00:00:17

29. July 2015

Mero’s Blog

Backwards compatibility in go

tl;dr: There are next to no "backwards compatible API changes" in go. You should explicitly name your compatibility guarantees.

I really love go, I really hate vendoring, and up until now I didn't really get why anyone would think go needs something like that. After all, go seems predestined to be used with automatically checked semantic versioning. You can enumerate all possible changes to an API in go, and the list is quite short. By looking at vcs tags carrying semantic versions and diffing the API, you can automatically check that you never break compatibility (the go compiler and stdlib actually do something like that). Heck, in theory you could even write a package manager that automatically (without any further annotations) determines the latest version of a package that still builds all your stuff, or gives you the minimum set of packages that need changes to reconcile conflicts.

This thought led me to contemplate what makes an API change a breaking change. After a bit of thought, my conclusion is that almost every API change is a breaking change, which might surprise you.

For this discussion we first need to make some assumptions about what constitutes breakage. We will use the go1 compatibility promise. The main gist is: Stuff that builds before is guaranteed to build after. Notable exceptions (apart from necessary breakages due to security or other bugs) are unkeyed struct literals and dot-imports.

[edit] I should clarify that whenever I talk about an API change, I mean your exported API as defined by code (as opposed to comments/documentation). This includes the public identifiers exported by your package, including type information. It excludes API requirements specified in documentation, like on io.Writer. Those are just too complex to talk about in a meaningful way and must be dealt with separately anyway. [/edit]

So, given this definition of breakage, we can start enumerating all the possible changes you could do to an API and check whether they are breaking under the definition of the go1 compatibility promise:

Adding func/type/var/const at package scope

This is the only thing that seems to be fine under the stability guarantee. It turns out the go authors thought about this one and put the exception of dot-imports into the compatibility promise, which is great.

dot-imports are imports of the form import . "foo". They import every package-level identifier of package foo into the scope of the current file.

The absence of dot-imports means that every identifier at your package scope must be referenced with a selector expression (i.e. foo.Bar), which can't be redeclared by downstream. It also means that you should never use dot-imports in your packages (which is a bad idea for other reasons too). Treat dot-imports as a historic artifact that is completely deprecated. An exception is the need to use a separate foo_test package for your tests to break dependency cycles. In that case it is widely deemed acceptable to import . "foo" to save typing and add clarity.

Removing func/type/var/const at package scope

Downstream might use the removed function/type/variable/constant, so this is obviously a breaking change.

Adding a method to an interface

Downstream might want to create an implementation of your interface and try to pass it. After you add a method, this type doesn't implement your interface anymore, and downstream's code will break.

Removing a method from an interface

Downstream might want to call this method on a value of your interface type, so this is obviously a breaking change.

Adding a field to a struct

This is perhaps surprising, but adding a field to a struct is a breaking change. The reason is that downstream might embed two types into a struct. If one of them has a field or method Bar, and the other is a struct you added the field Bar to, downstream will fail to build (because of an ambiguous selector expression).

So, e.g.:

// foo/foo.go
package foo

type Foo struct {
    Foo string
    Bar int // Added after the fact
}

// bar/bar.go
package bar

import "foo"

type Baz struct {
    Bar int
}

type Spam struct {
    foo.Foo
    Baz
}

func Eggs() {
    var s Spam
    s.Bar = 42 // ambiguous selector s.Bar
}

This is what the compatibility promise might refer to with the following quote:

Code that uses unkeyed struct literals (such as pkg.T{3, "x"}) to create values of these types would fail to compile after such a change. However, code that uses keyed literals (pkg.T{A: 3, B: "x"}) will continue to compile after such a change. We will update such data structures in a way that allows keyed struct literals to remain compatible, although unkeyed literals may fail to compile. (There are also more intricate cases involving nested data structures or interfaces, but they have the same resolution.)

(emphasis is mine). By "the same resolution" they might refer to only accessing embedded fields via a fully qualified selector (so e.g. s.Baz.Bar in the above example). If so, that is pretty obscure, and it makes struct embedding pretty much useless: if every usage of a field or method of an embedded type must be explicitly qualified, you might as well not embed it at all, since you need to write the selector and wrap every embedded method anyway.

I hope we all agree that type embedding is awesome and shouldn't need to be avoided :)

Removing a field from a struct

Downstream might use the now removed field, so this is obviously a breaking change.

Adding a method to a type

The argument is pretty much the same as for adding a field to a struct: downstream might embed your type and suddenly get ambiguities.

Removing a method from a type

Downstream might call the now removed method, so this is obviously a breaking change.

Changing a function/method signature

Most changes are obviously breaking. But as it turns out, you can't make any change to a function or method signature. This includes adding a variadic argument, which looks backwards compatible on the surface. After all, every call site will still be correct, right?

The reason is that downstream might save your function or method in a variable of the old type, which will break because the types are no longer assignable.


It looks to me like anything other than just adding a new identifier to the package scope will potentially break some downstream. This severely limits the kinds of changes you can make to your API if you want to claim backwards compatibility.

This of course doesn't mean that you should never ever make any changes to your API. But you should think about it, and you should clearly document what kind of compatibility guarantees you make. When you make any of the changes named in this post, you should check whether your downstreams are affected by it. If you claim a similar level of compatibility as the go standard library, you should definitely be aware of the implications and of what you can and can't do.

We, the go community, should probably come up with some coherent definition of which changes we deem backwards compatible and which we don't. A tool that automatically looks up all your (public) importers, downloads their latest versions and tries to build them against your changes should be fairly simple to write in go (and may even already exist). We should make it a standard check (like go vet and golint) for upstream package authors to run that kind of thing before pushing, to prevent frustrated downstreams.

Of course there is still the possibility that my reading of the go1 compatibility promise is wrong or inaccurate. I would welcome comments on that, just like on everything else in this post :)

29. July 2015 01:10:11

18. July 2015

sECuREs Webseite

ViewSonic VX2475Smhl-4K HiDPI display on Linux

I have been using a Dell UP2414Q monitor for a little over a year now. The Dell UP2414Q was the first commercially available display that qualified as what Apple calls a Retina Display, meaning it has such a high resolution that you cannot see the individual pixels at a normal viewing distance. In more technical terms, this is called a HiDPI display. To be specific, Dell’s UP2414Q has a 527mm wide and 296mm high screen, hence it has 185 dpi (3840 px / (527 mm / 25.4 mm)). I configured my system to use 192 dpi instead of the actual 185 dpi, because 192 is a clean multiple of 96 dpi, so scaling gets easier for software.

The big drawback of the Dell UP2414Q, and all other 4K displays with sufficiently high dpi, is that the display uses multiple scalers (also called multiple tiles) internally. This makes use of a DisplayPort feature called Multiple Stream Transport (MST): in layman’s terms, the display communicates to the graphics card that there are actually two displays with a resolution of 1920x1080 px each, both connected via the same DisplayPort cable. The graphics card then needs to split each frame into two halves and send them over the DisplayPort connection to the monitor. The reason all display vendors used this technique is that at the time, there simply were no scalers on the market which supported a resolution of 3840x2160 px.

Driver support for tiled displays

The problem with MST is that it’s poorly supported in the Linux ecosystem: for the longest time, the only driver supporting it at all was the closed-source nVidia driver, and you had to live without RandR when using it. With linux 4.1, MST support was added for the radeon driver, but I’m not sure if that is all that’s necessary to support 4K MST displays as there are other use-cases for MST. The intel driver still doesn’t have any MST support whatsoever, as of now (linux 4.1).

(RandR) Software support for tiled displays

Regardless of the driver you are using, you’ll need the very latest RandR 1.5, otherwise software will just see multiple monitors instead of one big monitor. Keith Packard published a blog post with a proposal to address this shortcoming, and the actual implementation work was included in the randrproto 1.5 release at 2015-05-17. I think it will take a while until all relevant software is updated, with your graphics driver, the X server and your desktop environment being the most important pieces of software. It’s unclear to me when/if Wayland will support tiled 4K displays.

Having to disable RandR means you’ll be unable to use tools like redshift to dim your monitor’s backlight, and you won’t be able to reconfigure or rotate your monitor without restarting your X session. Especially on a laptop, this is a big deal.

The ViewSonic VX2475Smhl-4K

I’m not sure why ViewSonic chose such a long product name, especially when comparing it with the competition’s names like HP z24s or BenQ BL2420U. This makes it pretty hard to talk about the product in real life, because nobody is going to remember that cryptic name. In that sense, it’s a good thing I won’t need to recommend this product.

Let’s start with the positive, and the main reason why I bought this monitor: the screen itself is great. It entirely fulfills my needs for extended periods of office work and occasionally watching a movie. I don’t play games on this monitor. With regards to connectivity, it comes with 2 HDMI ports (one of which is MHL-capable for connecting smartphones and tablets) and 1 DisplayPort. Since most graphics cards don’t support HDMI 2.0 yet (which you need for the native resolution of 3840x2160px at 60Hz), I am driving the monitor using DisplayPort, which works perfectly fine so far.

Unfortunately, the screen is the only good thing about this entire monitor. If you are using it in an office setting, you might be used to a lot more comfort. Here is a list of shortcomings, sorted by how severe I think each issue is:

  1. The monitor does not contain a USB hub at all. This is a big shortcoming I can’t understand at all. From plugging in wireless receivers for mouse/keyboard, over YubiKeys for second-factor authentication, to the occasional USB thumb drive, I don’t understand how anyone would not see the lack of USB ports as a big minus.
  2. The case of the monitor is painted in a glossy black, reflecting light. This is ironic, since the screen itself is matte, but you still have light reflections in your field of vision. I’ll need to see what I can do about that.
  3. The monitor feels a lot cheaper than other monitors. The stand it comes with is flimsy and does not allow for rotating the monitor or adjusting the height at all, so I’ve already ordered an Ergotron LX 45-241-026 to replace the stand. The buttons to power off/on and navigate the on-screen display don’t feel comfortable, and the power LED is bright blue, reflecting multiple times in the glossy stand.

Using the ViewSonic VX2475Smhl-4K with Linux

Using DisplayPort with the nouveau open-source driver

Connecting the ViewSonic VX2475Smhl-4K to my Gainward GeForce GTX 660 using DisplayPort works perfectly fine with the nouveau open-source driver version 1.0.11, meaning the driver detects the full native resolution of 3840x2160 px with a refresh rate of 60 Hz and does not use YUV 4:2:0 compression. As I understand it, you need to have a graphics card with DisplayPort 1.2 or newer in order to achieve 3840x2160 px at 60 Hz. Here’s the output of xrandr:

$ xrandr
Screen 0: minimum 320 x 200, current 3840 x 2160, maximum 8192 x 8192
DVI-I-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-1 connected 3840x2160+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   3840x2160     60.00*+  30.00    25.00    24.00    29.97    23.98  
   1920x1080     60.00    50.00    59.94  
   1920x1080i    60.00    50.00    59.94  
   1600x1200     60.00  
   1680x1050     59.95  
   1400x1050     59.98  
   1600x900      59.98  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.08    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DVI-D-1 disconnected (normal left inverted right x axis y axis)

Using DisplayPort with the nVidia closed-source driver

Connecting the ViewSonic VX2475Smhl-4K to my Gainward GeForce GTX 660 using DisplayPort works perfectly fine with the nVidia closed-source driver version 352.21 (haven’t tested it with other versions), meaning the driver detects the full native resolution of 3840x2160 px with a refresh rate of 60 Hz and does not use YUV 4:2:0 compression. As I understand it, you need to have a graphics card with DisplayPort 1.2 or newer in order to achieve 3840x2160 px at 60 Hz. Here’s the output of xrandr:

Screen 0: minimum 8 x 8, current 3840 x 2160, maximum 16384 x 16384
DVI-I-0 disconnected primary (normal left inverted right x axis y axis)
DVI-I-1 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-0 disconnected (normal left inverted right x axis y axis)
DVI-D-0 disconnected (normal left inverted right x axis y axis)
DP-1 connected 3840x2160+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   3840x2160     60.00*+  29.97    25.00    23.98  
   1920x1080     60.00    59.94    50.00    60.00    50.04  
   1680x1050     59.95  
   1600x1200     60.00  
   1600x900      60.00  
   1440x900      59.89  
   1400x1050     59.98  
   1280x1024     75.02    60.02  
   1280x720      60.00    59.94    50.00  
   1024x768      75.03    70.07    60.00  
   800x600       75.00    72.19    60.32    56.25  
   720x576       50.00  
   720x480       59.94  
   640x480       75.00    72.81    59.94    59.93  

And here’s the relevant block of verbose log output generated by the nVidia driver in /var/log/Xorg.0.log when starting X11 with -logverbose 6:

[ 87934.515] (II) NVIDIA(GPU-0): --- Building ModePool for ViewSonic VX2475 SERIES (DFP-4) ---
[ 87934.515] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[ 87934.515] (II) NVIDIA(GPU-0):     Mode Source: EDID
[ 87934.515] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[ 87934.515] (II) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[ 87934.515] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[ 87934.515] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[ 87934.515] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[ 87934.515] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[ 87934.515] (II) NVIDIA(GPU-0):       H/V Polarity     : +/-
[ 87934.515] (II) NVIDIA(GPU-0):     Viewport                 3840x2160+0+0
[ 87934.515] (II) NVIDIA(GPU-0):       Horizontal Taps        0
[ 87934.515] (II) NVIDIA(GPU-0):       Vertical Taps          0
[ 87934.515] (II) NVIDIA(GPU-0):       Base SuperSample       x1
[ 87934.515] (II) NVIDIA(GPU-0):       Base Depth             32
[ 87934.515] (II) NVIDIA(GPU-0):       Distributed Rendering  1
[ 87934.515] (II) NVIDIA(GPU-0):       Overlay Depth          32
[ 87934.515] (II) NVIDIA(GPU-0):     Mode "3840x2160_60" is valid.

Using HDMI with the nVidia closed-source driver

In order to drive the ViewSonic VX2475Smhl-4K with its native resolution of 3840x2160 px at a refresh rate of 60 Hz, you’ll need to have a graphics card that supports HDMI 2.0. As of 2015-07-17, the only cards I can find that feature HDMI 2.0 are nVidia’s Maxwell cards (NV110), e.g. the models GeForce GTX 960, 970 or 980. These models need a signed firmware blob, which nVidia has not yet released, hence you cannot use them with the open-source nouveau driver at all. I’ll not buy them until this issue is resolved.

Even though I know that it’s not supported, I was curious to see what happens when I try to connect the display to my GeForce GTX 660, which only has HDMI 1.4.

With the nVidia closed-source driver version 346.47, by default, you will end up with a resolution of 1920x1080 px. The X11 logfile /var/log/Xorg.0.log contains the following verbose log output when starting X11 with -logverbose 6:

[  8265.425] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160":
[  8265.425] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[  8265.425] (II) NVIDIA(GPU-0):     Mode Source: EDID
[  8265.425] (II) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[  8265.425] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[  8265.425] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[  8265.425] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[  8265.425] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[  8265.425] (II) NVIDIA(GPU-0):       H/V Polarity     : +/-
[  8265.425] (WW) NVIDIA(GPU-0):     Mode is rejected: PixelClock (533.2 MHz) too high for
[  8265.426] (WW) NVIDIA(GPU-0):     Display Device (Max: 340.0 MHz).

[  8265.429] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160":
[  8265.429] (II) NVIDIA(GPU-0):     3840 x 2160 @ 30 Hz
[  8265.429] (II) NVIDIA(GPU-0):     Mode Source: EDID
[  8265.429] (II) NVIDIA(GPU-0):       Pixel Clock      : 296.70 MHz
[  8265.429] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 4016
[  8265.429] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 4104, 4400
[  8265.429] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2168
[  8265.429] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2178, 2250
[  8265.429] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[  8265.429] (WW) NVIDIA(GPU-0):     Mode is rejected: Mode requires YUV 4:2:0 compression.

When setting Option "ModeValidation" "AllowNonEdidModes" and configuring the custom modeline Modeline "3840x2160" 307.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync, you can get a resolution of 3840x2160 px, but a refresh rate of only 30 Hz. Such a low refresh rate is only okay for watching movies; any sort of regular computer work is very inconvenient, as the mouse pointer is severely jumpy/laggy.
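For reference, these two settings live in the Monitor and Device sections of xorg.conf. A minimal sketch (the Identifier values are made up; the Modeline and ModeValidation values are the ones from the text):

```
Section "Monitor"
    Identifier "HDMI-Monitor"
    Modeline "3840x2160" 307.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync
EndSection

Section "Device"
    Identifier "nvidia-card"
    Driver "nvidia"
    Option "ModeValidation" "AllowNonEdidModes"
EndSection
```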

Since driver version 349.12, nVidia added support for YUV 4:2:0 compression. See this anandtech article about how nVidia cards achieve 4k@60 Hz over HDMI 1.4 and the wikipedia article on chroma subsampling in general.

I upgraded my driver to version 352.21, and indeed, by default, it now drives the monitor with 3840x2160 px at a refresh rate of 60 Hz, but using YUV 4:2:0 compression. The compression is immediately visible: the picture quality is noticeably worse. You can even see it in simple things such as a Gmail tab. To me, it looks similar to accidentally misconfiguring your system to use 16-bit colors instead of 24-bit colors. I recommend you avoid YUV 4:2:0 compression as much as possible, unless perhaps you’re just watching movies and aren’t interested in the best quality.

With version 352.21, the X11 logfile /var/log/Xorg.0.log contains the following verbose log output when starting X11 with -logverbose 6:

[   123.402] (WW) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[   123.402] (WW) NVIDIA(GPU-0):     Mode Source: EDID
[   123.402] (WW) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[   123.402] (WW) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[   123.402] (WW) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[   123.402] (WW) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[   123.402] (WW) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[   123.402] (WW) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[   123.402] (WW) NVIDIA(GPU-0):       H/V Polarity     : +/-
[   123.402] (WW) NVIDIA(GPU-0):     Mode is rejected: PixelClock (533.2 MHz) too high for
[   123.402] (WW) NVIDIA(GPU-0):     Display Device (Max: 340.0 MHz).
[   123.402] (WW) NVIDIA(GPU-0):     Mode "3840x2160_60" is invalid.

[   123.406] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[   123.406] (II) NVIDIA(GPU-0):     Mode Source: EDID
[   123.406] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[   123.406] (II) NVIDIA(GPU-0):       Pixel Clock      : 593.41 MHz
[   123.406] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 4016
[   123.406] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 4104, 4400
[   123.406] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2168
[   123.406] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2178, 2250
[   123.406] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[   123.406] (II) NVIDIA(GPU-0):     Viewport                 1920x2160+0+0
[   123.406] (II) NVIDIA(GPU-0):       Horizontal Taps        0
[   123.406] (II) NVIDIA(GPU-0):       Vertical Taps          0
[   123.406] (II) NVIDIA(GPU-0):       Base SuperSample       x1
[   123.406] (II) NVIDIA(GPU-0):       Base Depth             32
[   123.406] (II) NVIDIA(GPU-0):       Distributed Rendering  1
[   123.406] (II) NVIDIA(GPU-0):       Overlay Depth          32
[   123.406] (II) NVIDIA(GPU-0):     Mode "3840x2160_60" is valid.


With its single scaler, the ViewSonic VX2475Smhl-4K works just fine on Linux when using DisplayPort, which is a big improvement over the finicky Dell UP2414Q. Everything else about this monitor is pretty bad, so I would recommend you have a close look at the competition’s models, which as of 2015-07-17 are the HP z24s and the BenQ BL2420U. Neither of these is currently available in Switzerland, so it will take a while until I have the possibility to review either of them.

by Michael Stapelberg at 18. July 2015 00:30:00

17. July 2015

Mero’s Blog

Lazy evaluation in go

tl;dr: I did lazy evaluation in go

A small pattern that is useful for some algorithms is lazy evaluation. Haskell is famous for making extensive use of it. One way to emulate goroutine-safe lazy evaluation in go is using closures and the sync package:

type LazyInt func() int

func Make(f func() int) LazyInt {
    var v int
    var once sync.Once
    return func() int {
        once.Do(func() {
            v = f()
            f = nil // so that f can now be GC'ed
        })
        return v
    }
}

func main() {
    n := Make(func() int { return 23 }) // Or something more expensive…
    fmt.Println(n())                    // Calculates the 23
    fmt.Println(n() + 42)               // Reuses the calculated value
}
This is not the fastest possible code, but it already has less overhead than one might think (and it is pretty simple to derive a faster implementation from it). I have implemented a simple command that generates these implementations (or rather, more optimized ones based on the same idea) for different types.

This is of course just the simplest use case for laziness. In practice, you might also want implementations of lazy expressions:

func LazyAdd(a, b LazyInt) LazyInt {
    return Make(func() int { return a() + b() })
}
or lazy slices (slightly more complicated to implement, but possible), but I left that for a later improvement of the package (plus, it makes the already quite big API even bigger) :)

17. July 2015 19:31:10

02. May 2015


Setting Trackpoint speed via systemd

Disclaimer: This post does not offer the truth on how trackpoint speed and sensitivity should be set; it just shows how I am doing it at the moment. Since this is the first time I have written a systemd service file, or anything like this, I am very happy to receive any feedback on mistakes I may have made or anything else. Why trackpoint speed/sensitivity matters (to me): I am currently using a Thinkpad X220i with a heavily keyboard-based setup.

02. May 2015 00:00:00

13. April 2015

Mero’s Blog

Difficulties making SQL based authentication resilient against timing attacks

I've been thinking about how to build an authentication scheme that uses some kind of relational database as a backing store (it doesn't specifically matter that the database is relational; the concerns should pretty much apply to all databases), in a way that is resilient against timing side-channel attacks and doesn't leak any data about which usernames exist in the system and which don't.

The first obvious thing is that you need to do a constant-time comparison of password hashes. Luckily, most modern crypto libraries should include something like that (at least go's bcrypt implementation comes with that).

But now the question is how you prevent enumerating users (or checking for their existence). A naive query will return an empty result set if the user does not exist, so, again obviously, you need to compare against some password even if the user isn't found. But just doing, for example,

if result.Empty {
    // Compare against a prepared hash of an empty password, to have a
    // constant time check.
    bcrypt.CompareHashAndPassword(HashOfEmptyPassword, enteredPassword)
} else {
    bcrypt.CompareHashAndPassword(result.PasswordHash, enteredPassword)
}
won't get you very far, because (for example) the CPU will predict either of the two branches (and the compiler might or might not decide to "help" with that), so an attacker might still be able to distinguish between the two cases. The best way to achieve resilience against timing side-channels is to make sure that your control flow does not depend on input data at all: no branch or loop should ever take into account, in any way, what is actually input into your code (including the username and the result of the database query).

So my next thought was to modify the query to return the hash of an empty password as a default if no user is found. That way, your code is guaranteed to always get a well-defined bcrypt hash from the database, and your control flow does not depend on whether or not the user exists (an empty password can be safely rejected in advance, as returning early for that does not give any new data to the attacker).

Which sounds good, but now the question is whether the timing of your database query tells the attacker something. And this is where I hit a roadblock: if the attacker knows enough about your code (i.e. what database engine you are using, what machine you are running on and what kind of indices your database uses), they can potentially enumerate users by timing your database queries. To illustrate: if you used a simple linear list as an index, a failed search would have to traverse the whole list, whereas a successful search would abort early. The same issue exists with balanced trees. An attacker could hammer your application with unlikely usernames and measure the mean time to answer. They can then test individual usernames and check whether the time to answer is significantly below the mean for failures, thus enumerating usernames.

Now, I haven't tested this for practicality yet (might be fun) and it is pretty likely that this can't be exploited in reality. Also, the possibility of enumerating users isn't particularly desirable, but it is also far from a security meltdown of your authentication-system. Nevertheless, the idea that this theoretical problem exists makes me uneasy.

An obvious fix would be to make sure that every query always has to scan the complete table on every lookup. I don't know if that is possible. It might be trivial (by not giving a limit and not marking the username column as unique), but it might also be hard and database-dependent, because there will still be an index over the username column, which might still create the same kind of issues. There will also likely still be a variance, because we basically just shifted the condition from our own code into the DBMS. I simply have no idea.

So there you have it. I am happy to be corrected and pointed to some trivial design. I will likely accept the possibility of being vulnerable here, as the systems I am currently building aren't that critical. But I will probably still have a look at how other projects handle this, and maybe test whether there really is a problem in practice.

13. April 2015 02:49:53

10. April 2015


Connecting an iPod Shuffle

Backstory A few years ago, driving home from a night drinking in the old town of Heidelberg, I stumbled upon an iPod shuffle of the 4th generation. Being the drunk and stupid guy I was back then, I didn’t think much, and just took it home - just to wake up the other day recognizing I was now the proud owner of an iPod. Yay. Of course, now, that I had that thing, I wanted to use it.

10. April 2015 00:00:00

26. March 2015

mfhs Blog

A short return to Cambridge

About two weeks ago, on Friday the 13th of March, I was invited back to the Chemistry Department at the University of Cambridge. As part of their Theoretical Chemistry Informal Seminar talk series I was asked to give a short talk (Abstract) about my current research. Needless to say that I am very grateful that Lucy Colwell and the other organisers provided me with this great opportunity to share some of the insights we got in the past year. Over afternoon tea I also had the chance to catch up with some familiar faces from the theory section and meet some new researchers as well. With great pleasure I realised that very little has changed in Cambridge when it comes to the open and welcoming atmosphere that I enjoyed so much during my undergrad days.

The first part of my presentation is very similar to the one I prepared for our group seminar in Kleinwalsertal. In the section titled "Building the matrices" I describe our approach to build the relevant stiffness and mass matrices and what difficulties arise. At this point we cannot really provide acceptable solutions to these problems, however.

Link Licence
Invited Talk Cambridge 13.03.2015 Creative Commons License

by mfh at 26. March 2015 00:00:50