Planet NoName e.V.

21. November 2016

sECuREs Webseite

Gigabit NAS (running CoreOS)

tl;dr: I upgraded from a qnap TS-119P to a custom HTPC-like network storage solution. This article outlines my original reasoning for choosing the qnap TS-119P, what I learnt, and what exactly I replaced the qnap with.

A little over two years ago, I gave a (German) presentation about my network storage setup (see video or slides). Given that video isn’t a great consumption format when you’re pressed on time, and given that a number of my readers might not speak German, I’ll recap the most important points:

  • I reduced the scope of the setup to storing daily backups and providing media files via CIFS.
    I have come to prefer numerous smaller setups over one gigantic setup which offers everything (think a Turris Omnia acting as a router, mail server, network storage, etc.). Smaller setups can be debugged, upgraded or rebuilt in less time. Time-boxing activities has become very important to me as I work full time: if I can’t imagine finishing the activity within 1 or 2 hours, I usually don’t get started on it at all unless I’m on vacation.
  • Requirements: FOSS, relatively cheap, relatively low power usage, relatively high redundancy level.
    I’m looking not to spend a large amount of money on hardware. Whenever I do spend a lot, I feel pressured to get the most out of my purchase and use the hardware for many years. However, I find it more satisfying to be able to upgrade more frequently — just like the update this article is describing :).
    With regards to redundancy, I’m not content with being able to rebuild the system within a couple of days after a failure occurs. Instead, I want to be able to trivially switch to a replacement system within minutes. This requirement results in the decision to run 2 separate qnap NAS appliances with 1 hard disk each (instead of e.g. a RAID-1 setup).
    The decision to go with qnap as a vendor came from the fact that their devices are pretty well supported in the Debian ecosystem: there is a network installer for them, the custom hardware is supported by the qcontrol tool, and one can build a serial console connector.
  • The remainder of the points is largely related to software, and hence not relevant for this article, as I’m keeping the software largely the same (aside from the surrounding operating system).

What did not work well

Even a well-supported embedded device like the qnap TS-119P requires too much effort to be set up:

  1. Setting up the network installer is cumbersome.
  2. I contributed patches to qcontrol for setting the wake on LAN configuration on the qnap TS-119P’s controller board and for systemd support in qcontrol.
  3. I contributed my first ever patch to the Linux kernel for wake on LAN support for the Marvell MV643xx series chips.
  4. I ended up lobbying Debian to enable the CONFIG_MARVELL_PHY kernel option, while personally running a custom kernel.

On the hardware side, to get any insight into what the device is doing, your only input/output option is a serial console. To get easy access to that serial console, you need to solder an adapter cable for the somewhat non-standard header which they use.

All of this contributed to my impression that upgrading the device would be equally involved. Logically speaking, I know that this is unlikely since my patches are upstream in the Linux kernel and in Debian. Nevertheless, I couldn’t help but feel like it would be hard. As a result, I have not upgraded my device ever since I got it working, i.e. more than two years ago.

The take-away is that I now try hard to:

  1. use standard hardware which fits well into my landscape, and
  2. use a software setup which has as few non-standard modifications as possible and which automatically updates itself.

What I would like to improve

One continuous pain point was how slow the qnap TS-119P was with regards to writing data to disk. The slowness was largely caused by full-disk encryption. The device’s hardware accelerator turned out to be useless with cryptsetup-luks’s comparatively small hard-coded 4K block size, resulting in about 6 to 10 MB/s of throughput.

This resulted in me downloading files onto the SSD in my workstation and then transferring them to the network storage. Doing these downloads in a faster environment circumvents my somewhat irrational fears about the files becoming unavailable while I am downloading them, and allows me to take pleasure in how fast I’m able to download things :).

The take-away is that any new solution should be as quick as my workstation and network, i.e. it should be able to write files to disk with gigabit speed.

What I can get rid of

While dramaqueen and autowake worked well in principle, they turned out to no longer be very useful in my environment: I switched from a dedicated OpenELEC-based media center box to using emby and a Chromecast. emby is also nice to access remotely, e.g. when watching series at a friend’s place, or watching movies while on vacation or business trips somewhere. Hence, my storage solution needs to be running 24/7 — no more automated shutdowns and wake-up procedures.

What worked well

Reducing the scope drastically in terms of software setup complexity paid off. If it weren’t for that, I probably would not have been able to complete this upgrade within a few mornings/evenings and would likely have pushed this project out for a long time.

The new hardware

I researched the following components back in March, but then put the project on hold due to time constraints and to allow myself some more time to think about it. I finally ordered the components in August, and they still ranked best with regards to cost / performance ratio and fulfilling my requirements.

Price       Type         Article
43.49 CHF   Case         SILVERSTONE Sugo SST-SG05BB-LITE
60.40 CHF   Mainboard    ASROCK AM1H-ITX
38.99 CHF   CPU          AMD Athlon 5350 (supports AES-NI)
20.99 CHF   Cooler       Alpine M1-Passive
32.80 CHF   RAM          KINGSTON ValueRAM, 8.0GB (KVR16N11/8)
28.70 CHF   PSU          Toshiba PA3822U-1ACA, PA3822E-1AC3, 19V 2.37A (To19V_2.37A_5.5x2.5)
0 CHF       System disk  OCZ-AGILITY3 60G (you’ll need to do your own research; currently my system uses 5GB of space, so choose the smallest SSD you can find)
225.37 CHF  Total

For the qnap TS-119P, I paid 226 CHF, so my new solution is a tad more expensive. However, I had the OCZ-AGILITY3 lying around from a warranty exchange, so the bottom line is that I paid less than what I had paid for the previous solution.

I haven’t measured this myself, but according to the internet, the setups have the following power consumption (without disks):

  • The qnap TS-119P uses ≈7W, i.e. ≈60 CHF/year for electricity.
  • The AM1H-ITX / Athlon 5350 setup uses ≈20W, i.e. ≈77 CHF/year for electricity.

In terms of fitting well into my hardware landscape, this system does a much better job than the qnap. Instead of having to solder a custom serial port adapter, I can simply connect a USB keyboard and an HDMI or DisplayPort monitor and I’m done.

Further, any linux distribution can easily be installed from a bootable USB drive, without the need for any custom tools or ports.

Full-disk encryption performance

# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       338687 iterations per second
PBKDF2-sha256     228348 iterations per second
PBKDF2-sha512     138847 iterations per second
PBKDF2-ripemd160  246375 iterations per second
PBKDF2-whirlpool   84891 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   468.8 MiB/s  1040.9 MiB/s
     aes-cbc   256b   366.4 MiB/s   885.8 MiB/s
     aes-xts   256b   850.8 MiB/s   843.9 MiB/s
     aes-xts   512b   725.0 MiB/s   740.6 MiB/s
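
For reference, the cipher is chosen when the LUKS volume is created. A hypothetical luksFormat invocation matching the aes-cbc-essiv:sha256 setup used further below (the /dev/sdb2 device matches the unlock.service in Appendix A; adjust to your disk):

cryptsetup luksFormat --cipher aes-cbc-essiv:sha256 --key-size 256 /dev/sdb2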

Network performance

As the old qnap TS-119P would only sustain gigabit performance using IPv4 (with TCP checksum offloading), I was naturally relieved to see that the new solution can send packets at gigabit line rate using both protocols, IPv4 and IPv6. I ran the following tests inside a docker container (docker run --net=host -t -i debian:latest):

# nc 10.0.0.76 3000 | dd of=/dev/null bs=5M
0+55391 records in
0+55391 records out
416109464 bytes (416 MB) copied, 3.55637 s, 117 MB/s
# nc 2001:db8::225:8aff:fe5d:53a9 3000 | dd of=/dev/null bs=5M
0+275127 records in
0+275127 records out
629802884 bytes (630 MB) copied, 5.45907 s, 115 MB/s

The CPU was >90% idle using netcat-traditional.
The CPU was >70% idle using netcat-openbsd.
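
The sending side of these tests is not shown above; a minimal sketch of what it could have looked like on the machine at 10.0.0.76 (an assumption on my part, using netcat-openbsd syntax; netcat-traditional would use -l -p 3000 instead):

dd if=/dev/zero bs=5M count=200 | nc -l 3000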

End-to-end throughput

Reading/writing to a disk which uses cryptsetup-luks full-disk encryption with the aes-cbc-essiv:sha256 cipher, these are the resulting speeds:

Reading a file from a CIFS mount works at gigabit throughput, without any tuning:

311+1 records in
311+1 records out
1632440260 bytes (1,6 GB) copied, 13,9396 s, 117 MB/s

Writing works at almost gigabit throughput:

1160+1 records in
1160+1 records out
6082701588 bytes (6,1 GB) copied, 58,0304 s, 105 MB/s
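
For context, the read/write numbers above come from plain dd runs against the CIFS mount; a sketch of what such invocations could look like (file paths are hypothetical, bs=5M matches the record size above):

# Read test: pull a large file from the CIFS mount and discard it.
dd if=/mnt/storage/movies/bigfile.mkv of=/dev/null bs=5M
# Write test: copy a large local file onto the CIFS mount.
dd if=bigfile.mkv of=/mnt/storage/incoming/bigfile.mkv bs=5M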

During rsync+ssh backups, the CPU is never 100% maxed out, and data is sent to the NAS at 65 MB/s.

The new software setup

Given that I wanted to use a software setup which has as few non-standard modifications as possible and automatically updates itself, I was curious to see if I could carry this to the extreme by using CoreOS.

If you’re unfamiliar with it, CoreOS is a Linux distribution which is intended to be used in clusters on individual nodes. It updates itself automatically (using omaha, Google’s updater behind ChromeOS) and comes as a largely read-only system without a package manager. You deploy software using Docker and configure the setup using cloud-config.

I have been successfully using CoreOS for a few years in virtual machine setups such as the one for the RobustIRC network.

The cloud-config file I came up with can be found in Appendix A. You can pass it to the CoreOS installer’s -c flag. Personally, I installed CoreOS by booting from a grml live linux USB key, then running the CoreOS installer.
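
For reference, the installer invocation could look roughly like this (a sketch; the target disk /dev/sda and the stable channel are my assumptions):

sudo coreos-install -d /dev/sda -C stable -c cloud-config.storage.yaml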

In order to update the cloud-config file after installing CoreOS, you can use the following commands:

midna $ scp cloud-config.storage.yaml core@10.252:/tmp/
storage $ sudo cp /tmp/cloud-config.storage.yaml /var/lib/coreos-install/user_data
storage $ sudo coreos-cloudinit --from-file=/var/lib/coreos-install/user_data

Dockerfiles: rrsync and samba

Since neither rsync nor samba directly provide Docker containers, I had to whip up the following small Dockerfiles which install the latest versions from Debian jessie.

Of course, this means that I need to rebuild these two containers regularly, but I can also easily roll them back in case an update breaks something, and the rest of the operating system updates independently of these mission-critical pieces.

Eventually, I’m looking to enable auto-build for these Docker containers so that the Docker hub rebuilds the images when necessary, and the updates are picked up either manually when time-critical, or automatically by virtue of CoreOS rebooting to update itself.

FROM debian:jessie
RUN apt-get update \
  && apt-get install -y rsync \
  && gunzip -c /usr/share/doc/rsync/scripts/rrsync.gz > /usr/bin/rrsync \
  && chmod +x /usr/bin/rrsync
ENTRYPOINT ["/usr/bin/rrsync"]

FROM debian:jessie
RUN apt-get update && apt-get install -y samba
ADD smb.conf /etc/samba/smb.conf
EXPOSE 137 138 139 445
CMD ["/usr/sbin/smbd", "-FS"]
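
To rebuild and publish the two images, something along these lines should do (the directory layout is my assumption; the image names match the ones referenced in the cloud-config in Appendix A):

docker build -t stapelberg/rsync rsync/ && docker push stapelberg/rsync
docker build -t stapelberg/samba samba/ && docker push stapelberg/samba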

Appendix A: cloud-config

#cloud-config

hostname: "storage"

ssh_authorized_keys:
  - ssh-rsa AAAAB… michael@midna

write_files:
  - path: /etc/ssl/certs/r.zekjur.net.crt
    content: |
      -----BEGIN CERTIFICATE-----
      MIIDYjCCAko…
      -----END CERTIFICATE-----
  - path: /var/lib/ip6tables/rules-save
    permissions: 0644
    owner: root:root
    content: |
      # Generated by ip6tables-save v1.4.14 on Fri Aug 26 19:57:51 2016
      *filter
      :INPUT DROP [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -p ipv6-icmp -m comment --comment "IPv6 needs ICMPv6 to work" -j ACCEPT
      -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment "Allow packets for outgoing connections" -j ACCEPT
      -A INPUT -s fe80::/10 -d fe80::/10 -m comment --comment "Allow link-local traffic" -j ACCEPT
      -A INPUT -s 2001:db8::/32 -m comment --comment "local traffic" -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
      COMMIT
      # Completed on Fri Aug 26 19:57:51 2016
  - path: /root/.ssh/authorized_keys
    permissions: 0600
    owner: root:root
    content: |
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/midna:/srv/backup/midna stapelberg/rsync /srv/backup/midna" ssh-rsa AAAAB… root@midna
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/scan2drive:/srv/backup/scan2drive stapelberg/rsync /srv/backup/scan2drive" ssh-rsa AAAAB… root@scan2drive
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/alp.zekjur.net:/srv/backup/alp.zekjur.net stapelberg/rsync /srv/backup/alp.zekjur.net" ssh-rsa AAAAB… root@alp

coreos:
  update:
    reboot-strategy: "reboot"
  locksmith:
    window_start: 01:00 # UTC, i.e. 02:00 CET or 03:00 CEST
    window_length: 2h  # i.e. until 03:00 CET or 04:00 CEST
  units:
    - name: ip6tables-restore.service
      enable: true

    - name: 00-enp2s0.network
      runtime: true
      content: |
        [Match]
        Name=enp2s0

        [Network]
        DNS=10.0.0.1
        Address=10.0.0.252/24
        Gateway=10.0.0.1
        IPv6Token=0:0:0:0:10::252

    - name: systemd-networkd-wait-online.service
      command: start
      drop-ins:
        - name: "10-interface.conf"
          content: |
            [Service]
            ExecStart=
            ExecStart=/usr/lib/systemd/systemd-networkd-wait-online \
	      --interface=enp2s0

    - name: unlock.service
      command: start
      content: |
        [Unit]
        Description=unlock hard drive
        Wants=network.target
        After=systemd-networkd-wait-online.service
        Before=samba.service
        
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Wait until the host is actually reachable.
        ExecStart=/bin/sh -c "c=0; while [ $c -lt 5 ]; do \
	    /bin/ping6 -n -c 1 r.zekjur.net && break; \
	    c=$((c+1)); \
	    sleep 1; \
	  done"
        ExecStart=/bin/sh -c "(echo -n my_local_secret && \
	  wget \
	    --retry-connrefused \
	    --ca-directory=/dev/null \
	    --ca-certificate=/etc/ssl/certs/r.zekjur.net.crt \
	    -qO- https://r.zekjur.net/sdb2_crypt) \
	  | /sbin/cryptsetup --key-file=- luksOpen /dev/sdb2 sdb2_crypt"
        ExecStart=/bin/mount /dev/mapper/sdb2_crypt /srv

    - name: samba.service
      command: start
      content: |
        [Unit]
        Description=samba server
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull stapelberg/samba:latest

        # Set up samba users (cannot be done in the (public) Dockerfile
        # because users/passwords are sensitive information).
        ExecStartPre=-/usr/bin/docker kill smb
        ExecStartPre=-/usr/bin/docker rm smb
        ExecStartPre=-/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep stapelberg/samba \
	  adduser --quiet --disabled-password --gecos "" --uid 29901 michael
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared
        ExecStartPre=/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep smb-prepared \
	  /bin/sh -c "echo my_password | tee - | smbpasswd -a -s michael"
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared

        ExecStart=/usr/bin/docker run \
          -p 137:137 \
          -p 138:138 \
          -p 139:139 \
          -p 445:445 \
          --tmpfs=/run \
          -v /srv/data:/srv/data \
          --name smb \
          -t \
          smb-prepared \
            /usr/sbin/smbd -FS

    - name: emby.service
      command: start
      content: |
        [Unit]
        Description=emby
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull emby/embyserver

        ExecStart=/usr/bin/docker run \
          --rm \
          --net=host \
          -v /srv/data/movies:/srv/data/movies:ro \
          -v /srv/data/series:/srv/data/series:ro \
          -v /srv/emby:/config \
          emby/embyserver

by Michael Stapelberg at 21. November 2016 16:12:00

16. November 2016

sur5r/blog

Another instance of AC_DEFINE being undefined

While trying to build a backport of i3 4.13 for Debian wheezy (currently oldstable), I stumbled across the following problem:


   dh_autoreconf -O--parallel -O--builddirectory=build
configure.ac:132: error: possibly undefined macro: AC_DEFINE
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
dh_autoreconf: autoreconf -f -i returned exit code 1


Digging around the net, I found nothing that helped. So I tried building that package manually. After some trial and error, I noticed that autoreconf as of wheezy seems to ignore AC_CONFIG_MACRO_DIR. Calling autoreconf with -i -f -I m4 solved this.

I finally added this to debian/rules:

override_dh_autoreconf:
        dh_autoreconf autoreconf -- -f -i -I m4

by sur5r (nospam@example.com) at 16. November 2016 00:49:23

29. September 2016

RaumZeitLabor

††† To all Cheesus freaks: holy feasting with Käsus Christus †††

KaesusLiebtDich

It is time to commemorate our friend Käsus Christus!

We want to break bread Halloumi-gether at a communal supper, for eating with friends is good for the soul. So Comté along to the RaumZeitLabor in Mannheim-Emmental on 22.10.2016. When the Babybel rings at 18:30, we will open the holy Feta with a Havarti Father. Do not hesitate to accept this invitation: so that we know how much wine to make from water, give us a divine sign [†] by 19.10.2016. We would be delighted if everyone dropped 10 euros into the collection bag. Remember: Käsus Christus also died for your Stilton.

Brie-ce be with you. Amen.

[†] No mail, no cheese!

by flederrattie at 29. September 2016 00:00:00

31. August 2016

Mero’s Blog

I've been diagnosed with ADHD

tl;dr: I've been diagnosed with ADHD. I ramble incoherently for a while and I might do some less rambling posts about it in the future.

As the title says, I've been recently diagnosed with ADHD and I thought I'd try to be as open about it as possible and share my personal experiences with mental illness. That being said, I am also adding the disclaimer that I have no training or special knowledge about it, and that the fact that I have been diagnosed with ADHD does not mean I am an authoritative source on its effects, that this diagnosis is going to stick, or that my experiences in any way generalize to other people who got the same diagnosis.

This will hopefully turn into a bit of a series of blog posts and I'd like to start it off with a general description of what led me to look for a diagnosis and treatment in the first place. Some of the things I am only touching on here I might write more about in the future (see below for a non-exhaustive list). Or not. I have not yet decided :)


It is no secret (it's actually kind of a running gag) that I am a serious procrastinator. I always had trouble starting on something and staying with it; my graveyard of unfinished projects is huge. For most of my life, however, this hasn't been a huge problem for me. I was reasonably successful in compensating for it with a decent amount of intelligence (arrogant as that sounds). I never needed to do homework or study for tests in school, and even at university I only spent very little time on both. The homework we got was short-term enough that procrastination was not a real danger: I finished it quickly, and whenever there was a longer-term project to finish (such as a seminar talk or studying for exams) I could cram for a night and get enough of an understanding of things to do a decent job.

However, that strategy did not work for either my bachelor or my master thesis, which predictably led to both turning out a lot worse than I would've wished for (I am not going to go into too much detail here). Self-organized long-term work seemed next to impossible. This problem got much worse when I started working full-time. Now almost all my work was self-organized and long-term. Goals are set on a quarterly basis, the decision when and how and how much to work is completely up to you. Other employers might frown at their employees slacking off at work; where I work, it's almost expected. I was good at being oncall, which is mainly reactive, short-term problem solving. But I was (and am) completely dissatisfied with my project work. I felt that I did not get nearly as much done as I should or as I would want. My projects in my first quarter had very clear deadlines and I finished on time (I still procrastinated, but at least at some point I sat down until I got it done. It still meant staying at work until 2am the day before the deadline), but after that it went downhill fast, with projects that needed to be done ASAP, but without a deadline. So I started slipping. I didn't finish my projects (in fact, the ones that I didn't drop I am still not done with), I spent weeks doing effectively nothing (I am not exaggerating here. I spent whole weeks not writing a single line of code, closing a single bug or running a single command, doing nothing but browsing reddit and twitter and wasting my time in similar ways. Yes, you can waste a week doing nothing, while sitting at your desk), not being able to get myself to start working on anything and hating myself for it.

And while I am mostly talking about work, this affected my personal life too. Mail remains unopened, important errands don't get done, I am having trouble keeping in contact with friends, because I can always write or visit them some other time…

I tried (and had tried over the past years) several systems to organize myself better, to motivate myself and to remove distractions. I was initially determined to try to solve my problems on my own, that I did not really need professional help. However, at some point, I realized that I won't be able to fix this just by willing myself to do it. I realized it in the final months of my master thesis, but convinced myself that I don't have time to fix it properly, after all, I have to write my thesis. I then kind of forgot about it (or rather: I procrastinated it) in the beginning of starting work, because things were going reasonably well. But it came back to me around the start of this year. After not finishing any of my projects in the first quarter. And after telling my coworkers and friends of my problems and them telling me that it's just impostor syndrome and a distorted view of myself (I'll go into why they were wrong some more later, possibly).

I couldn't help myself and I couldn't get effective help from my coworkers. So, in April, I finally decided to see a Psychologist. Previously the fear of the potential cost (or the stress of dealing with how that works with my insurance), the perceived complexity of finding one that is both accepting patients that are only publicly insured and is specialized on my particular kinds of issues and the perceived lack of time prevented me from doing so. Apart from a general doubt about its effectiveness and fear of the implications for my self-perception and world-view, of course.

Luckily one of the employee-benefits at my company is free and uncomplicated access to a Mental Health (or "Emotional well-being", what a fun euphemism) provider. It only took a single E-Mail and the meetings happen around 10 meters away from my desk. So I started seeing a psychologist on a regular basis (on average probably every one and a half weeks or so) and talking about my issues. I explained and described my problems and it went about as well as I had feared; they tried to convince me that the real issue isn't my performance, but my perception of it (and I concede that I still have trouble coming up with hard, empirical evidence to present to people. Though it's performance review season right now. As I haven't done anything of substance in the past 6 months, maybe I will finally get that evidence…) and they tried to get me to adopt more systems to organize myself and remove distractions. All the while, I got worse and worse. My inability to get even the most basic things done or to concentrate even for an hour, even for five minutes, on anything of substance, combined with the inherent social isolation of moving to a new city and country, led me into deep depressive episodes.

Finally, when my Psychologist in a session tried to get me to write down what was essentially an Unschedule (a system I knew about from reading "The Now Habit" myself while working on my bachelor thesis and that I even had moderate success with; for about two weeks), I broke down. I told them that I do not consider this a valuable use of these sessions, that I tried systems before, that I tried this particular system before and that I can find this kind of lifestyle advice on my own, in my free time. That the reason I was coming to these sessions was to get systematic, medical, professional help of the kind that I can't get from books. So we agreed, at that point, to pursue a diagnosis, as a precondition for treatment.

Which, basically, is where we are at now. The diagnostic process consisted of several sessions of questions about my symptoms, my childhood and my life in general, of filling out a couple of diagnostic surveys and having my siblings fill them out too (in the hope that they can fill in some of the blanks in my own memory about my childhood) and of several sessions of answering more questions from more surveys. And two weeks ago, I officially got the diagnosis ADHD. And the plan to attack it by a combination of therapy and medication (the latter, in particular, is really hard to get from books, for some reason :) ).

I just finished my first day on Methylphenidate (the active substance in Ritalin), specifically Concerta. And though this is definitely much too early to actually make definitive judgments on its effects and effectiveness, at least for this one day I was feeling really great and actively happy. Which, coincidentally, helped me to finally start on this post, to talk about mental health issues; a topic I've been wanting to talk about ever since I started this blog (again), but so far didn't really feel I could.


As I said, this is hopefully the first post in a small ongoing series. I am aware that it is long, rambling and probably contentious. It definitely won't get all my points across and will change the perception some people have of me (I can hear you thinking how all of this doesn't really speak "mental illness", how it seems implausible that someone with my CV would actually, objectively, get nothing done and how I am a drama queen and shouldn't try to solve my problems with dangerous medication). It's an intentionally broad overview of my process and its main purpose is to "out" myself publicly and create starting points for multiple, more specific, concise, interesting and convincing posts in the future. Things I might, or might not talk about are

  • My specific symptoms and how this has influenced and still is influencing my life (and how, yes, this is actually an illness, not just a continuous label). In particular, there are things I wasn't associating with ADHD, which turn out to be relatively tightly linked.
  • How my medication is specifically affecting me and what it does to those symptoms. I can not overstate how fascinated I am with today's experience. I was wearing a visibly puzzled expression all day because I couldn't figure out what was happening. And then I couldn't stop smiling. :)
  • Possibly things about my therapy? I really don't know what to expect about that, though. Therapy is kind of the long play, so it's much harder to evaluate and talk about its effectiveness.
  • Why I consider us currently to be in the Middle Ages of mental health and why I think that in a hundred years or so people will laugh at how we currently deal with it. And possibly be horrified.
  • My over ten years (I'm still young, mkey‽) of thinking about my own mental health and mental health in general and my thoughts of how mental illness interacts with identity and self-definition.
  • How much I loathe the term "impostor syndrome" and why I am (still) convinced that I don't get enough done, even though I can't produce empirical evidence for that and people try to convince me otherwise. And what it does to you, to need to take the "I suck" side of an argument and still not have people believe you.

Let me know what you think :)

31. August 2016 02:22:38

15. August 2016

sur5r/blog

Calculating NXP LPC43xx checksums using srecord 1.58

As everybody on the internet seems to be relying on either Keil on Windows or precompiled binaries from unknown sources to generate their checksums, I investigated a bit...

Assuming you ran something like this to generate your binary image:

arm-none-eabi-objcopy -v -O binary firmware.elf firmware.bin

The resulting firmware.bin is still lacking the checksum which LPC chips use to check the validity of the user code.

The following command will create a file firmware_out.bin with the checksum applied:

srec_cat firmware.bin -binary -crop 0x0 0x1c -le-checksum-negative 0x1C 4 4 firmware.bin -binary -crop 0x20 -o firmware_out.bin -binary
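
A rough breakdown of what the individual parts do, based on my reading of the srec_cat documentation (double-check against the LPC43xx user manual):

# firmware.bin -binary -crop 0x0 0x1c    take the first seven exception vectors (bytes 0x00..0x1b)
# -le-checksum-negative 0x1C 4 4         store their two's-complement sum as a 4-byte little-endian
#                                        word at offset 0x1C, which is what the LPC boot ROM verifies
# firmware.bin -binary -crop 0x20        keep the rest of the image from offset 0x20 onwards
# -o firmware_out.bin -binary            write the result as a raw binary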

by sur5r (nospam@example.com) at 15. August 2016 15:05:54

19. July 2016

RaumZeitLabor

MRMCD 2016: Presale underway

Preliminary t-shirt design

The presale of this year’s MRMCD tickets started a few weeks ago: you can order tickets and t-shirts at presale.mrmcd.net until the end of July.

By buying your tickets in advance, you make organizing this conference a lot easier for us: it enables us to plan the event properly and gives us the money we need to have before the conference. There will be a box office on-site, but it will offer no t-shirts and only a limited number of goodies.

If you have any questions, please contact us at klinikleitung@mrmcd.net.

The MRMCD (MetaRheinMainChaosDays) are an annual IT conference of the CCC with a slight thematic focus on IT security, which has been taking place in the Rhine-Main area for more than ten years. Since 2012, the event has been organized in cooperation with the faculty of Computer Science of the University of Applied Sciences Darmstadt (h_da). Apart from a high-quality talk programme, the MRMCD provide the opportunity for relaxed exchange with numerous IT experts in an informal atmosphere. This year’s motto “diagnose: kritisch” (“diagnosis: critical”) sets a special focus on IT and innovation in the medical and health field.

We are also still accepting talk submissions until July 25th and look forward to your submission in the frab at frab.cccv.de. You can find further information on our website, mrmcd.net.

by Alexander at 19. July 2016 00:00:00

20. June 2016

Moredreads Blog

16. June 2016

sECuREs Webseite

Conditionally tunneling SSH connections

Whereas most of the networks I regularly use (home, work, hackerspace, events, …) provide native IPv6 connectivity, sometimes I’m in a legacy-only network, e.g. when tethering via my phone on some mobile providers.

By far the most common IPv6-only service I use these days is SSH to my computer(s) at home. On philosophical grounds, I refuse to set up a dynamic DNS record and port-forwardings, so the alternative I use is either Miredo or tunneling through a dual-stacked machine. For the latter, I used to use the following SSH config:

Host home
        Hostname home.zekjur.net
        ProxyCommand ssh -4 dualstack -W %h:%p

The issue with that setup is that it’s inefficient when I’m in a network which does support IPv6, and it requires me to authenticate to both machines. These are not huge issues of course, but they bother me enough that I’ve gotten into the habit of commenting out/in the ProxyCommand directive regularly.

I’ve discovered that SSH can be told to use a ProxyCommand only when you don’t have a route to the public IPv6 internet, though:

Match host home exec "ip -6 route get 2001:7fd::1 | grep -q unreachable"
        ProxyCommand ssh -4 dualstack -W %h:%p

Host home
        Hostname home.zekjur.net

The IPv6 address used is from k.root-servers.net, but no packets are being sent — we merely ask the kernel for a route. When you don’t have an IPv6 address or default route, the ip command will print unreachable, which enables the ProxyCommand.
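
To check by hand which path SSH will take on the current network, you can run the same test the Match line uses, wrapped in a one-liner:

ip -6 route get 2001:7fd::1 | grep -q unreachable && echo "tunnel via dualstack" || echo "direct IPv6"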

For debugging/verifying the setup works as expected, use ssh -vvv and look for lines prefixed with “debug3”.

by Michael Stapelberg at 16. June 2016 17:20:00

02. June 2016

RaumZeitLabor

Excursion to the elevator museum in Seckenheim

It’s finally that time again: excursion time! Or rather: elevator time! Together we will visit the elevator museum in Seckenheim.

The Seckenheim water tower was converted into a museum by the Lochbühler company; it houses working elevators, components and their technology dating back to the end of the 19th century. Besides the »normal« elevators, the museum also has a paternoster that can be viewed and, if we behave, perhaps even ridden.

The whole thing takes place on Friday, 24 June 2016. We will meet at 18:15 in front of the entrance at Kloppenheimer Straße 94 in Mannheim-Seckenheim. The guided tour starts at 18:30 and is expected to last about an hour and a half.

Attendance is limited to a maximum of 20 people, so prior registration is required. To register, simply send me an email with the subject “Exkursion Aufzugsmuseum” and the number of seats you would like. Registrations must reach me by 22 June 2016, 13:37.

You can find further information, for example, on the Rhein-Neckar-Industriekultur website.

by flederrattie at 02. June 2016 00:00:00

01. June 2016

Moredreads Blog

26. May 2016

mfhs Blog

Introduction to awk programming block course

Last year I taught a course about bash scripting, during which I briefly touched on the scripting language awk. Some of the attending people wanted to hear more about this, so I was asked by my graduate school to prepare a short block course on awk programming for this year. The course will be running from 15th till 17th August 2016 and registration is now open. You can find an outline and further information on the "Introduction to awk" course page. If you cannot make the course in person, do not worry: All course material will be published both on the course website and on github afterwards.

by mfh at 26. May 2016 00:00:19

10. May 2016

sECuREs Webseite

Supermicro X11SSZ-QF workstation mainboard

Context

For the last 3 years I’ve used the hardware described in my 2012 article. In order to drive a hi-dpi display, I needed to install an nVidia graphics card, since only the nVidia hardware/software supported multi-tile displays requiring MST (Multiple Stream Transport) such as the Dell UP2414Q. While I’ve switched to a Viewsonic VX2475Smhl-4K in the meantime, I still needed a recent-enough DisplayPort output that could deliver 3840x2160@60Hz. This is not the case for the Intel Core i7-2600K’s integrated GPU, so I needed to stick with the nVidia card.

I then stumbled over a video file which, when played back using the nouveau driver’s VDPAU functionality, would lock up my graphics card entirely, so that only a cold reboot helped. This got me annoyed enough to upgrade my hardware.

Why the Supermicro X11SSZ-QF?

Intel, my standard pick for mainboards with good Linux support, unfortunately stopped producing desktop mainboards. I looked around a bit for Skylake mainboards and realized that the Intel Q170 Express chipset actually supports 2 DisplayPort outputs that each support 3840x2160@60Hz, enabling a multi-monitor hi-dpi display setup. While I don’t currently have multiple monitors and don’t intend to get another monitor in the near future, I thought it’d be nice to have that as a possibility.

Turns out that there are only two mainboards out there which use the Q170 Express chipset and actually expose two DisplayPort outputs: the Fujitsu D3402-B, and the Supermicro X11SSZ-QF. The Fujitsu one doesn’t have an integrated S/PDIF output, which I need to play audio on my Denon AVR-X1100W without a constant noise level. Also, I wasn’t able to find software downloads or even a manual for the board on the Fujitsu website. For Supermicro, you can find the manual and software very easily on their website, and because I bought Supermicro hardware in the past and was rather happy with it, I decided to go with the Supermicro option.

I’ve been using the board for half a year now, without any stability issues.

Mechanics and accessories

The X11SSZ-QF ships with a printed quick reference sheet, an I/O shield and 4 SATA cables. Unfortunately, Supermicro apparently went for the cheapest SATA cables they could find, as they do not have a clip to ensure they don’t slide off of the hard disk connector. This is rather disappointing for a mainboard that costs more than 300 CHF. Further, an S/PDIF bracket is not included, so I needed to order one from the USA.

The I/O shield comes with covers over each port, which I assume is because the X11SSZ mainboard family has different ports (one model has more ethernet ports, for example). When removing the covers, push them through from the rear side of the case (if you had installed it already). If you do it from the other side, a bit of metal will remain in each port.

Due to the positioning of the CPU socket, with my Fractal Design Define R3 case, one cannot reach the back of the CPU fan bracket when the mainboard is installed in the case. Hence, you need to first install the CPU fan, then install the mainboard. This is doable, you just need to realize it early enough and think about it, otherwise you’ll install the mainboard twice.

Integrated GPU not initialized

The integrated GPU is not initialized by default. You need to either install an external graphics card or use IPMI to enter the BIOS and change Advanced → Chipset Configuration → Graphics Configuration → Primary Display to “IGFX”.

For using IPMI, you need to connect the ethernet port IPMI_LAN (top right on the back panel, see the X11SSZ-QF quick reference guide) to a network which has a DHCP server, then connect to the IPMI’s IP address in a browser.

Overeager Fan Control

When I first powered up the mainboard, I was rather confused by the behavior: I got no picture (see above), but LED2 was blinking, meaning “PWR Fail or Fan Fail”. In addition, the computer seemed to turn itself off and on in a loop. After a while, I realized that it’s just the fan control which thinks my slow-spinning Scythe Mugen 3 Rev. B CPU fan is broken because of its low RPM value. The fan control subsequently spins up the fan to maximum speed, realizes the CPU is cool enough, spins down the fan, realizes the fan speed is too low, spins up the fan, etc.

Neither in the BIOS nor in the IPMI web interface did I find any options for the fan thresholds. Luckily, you can actually introspect and configure them using IPMI:

# apt-get install freeipmi-tools
# ipmi-sensors-config --filename=ipmi-sensors.config --checkout

In the human-readable text file ipmi-sensors.config you can now introspect the current configuration. You can see that FAN1 and FAN2 have sections in that file:

Section 607_FAN1
 Enable_All_Event_Messages Yes
 Enable_Scanning_On_This_Sensor Yes
 Enable_Assertion_Event_Lower_Critical_Going_Low Yes
 Enable_Assertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Assertion_Event_Upper_Critical_Going_High Yes
 Enable_Assertion_Event_Upper_Non_Recoverable_Going_High Yes
 Enable_Deassertion_Event_Lower_Critical_Going_Low Yes
 Enable_Deassertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Deassertion_Event_Upper_Critical_Going_High Yes
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
 Lower_Non_Critical_Threshold 700.000000
 Lower_Critical_Threshold 500.000000
 Lower_Non_Recoverable_Threshold 300.000000
 Upper_Non_Critical_Threshold 25300.000000
 Upper_Critical_Threshold 25400.000000
 Upper_Non_Recoverable_Threshold 25500.000000
 Positive_Going_Threshold_Hysteresis 100.000000
 Negative_Going_Threshold_Hysteresis 100.000000
EndSection

When running ipmi-sensors, you can see the current temperatures, voltages and fan readings. In my case, the fan spins at 700 RPM during normal operation, which was exactly the Lower_Non_Critical_Threshold in the default IPMI config. Hence, I modified my config file as illustrated by the following diff:

--- ipmi-sensors.config	2015-11-13 11:53:00.940595043 +0100
+++ ipmi-sensors-fixed.config	2015-11-13 11:54:49.955641295 +0100
@@ -206,11 +206,11 @@
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
- Lower_Non_Critical_Threshold 700.000000
+ Lower_Non_Critical_Threshold 400.000000
- Lower_Critical_Threshold 500.000000
+ Lower_Critical_Threshold 200.000000
- Lower_Non_Recoverable_Threshold 300.000000
+ Lower_Non_Recoverable_Threshold 0.000000
 Upper_Non_Critical_Threshold 25300.000000

You can install the new configuration using the --commit flag:

# ipmi-sensors-config --filename=ipmi-sensors-fixed.config --commit

You might need to shut down your computer and disconnect power for this change to take effect, since the BMC is running even when the mainboard is powered off.

S/PDIF output

The S/PDIF pin header on the mainboard just doesn’t work at all. It does not work in Windows 7 (for which the board was made), and it doesn’t work in Linux. Neither the digital nor the analog part of an S/PDIF port works. When introspecting the Intel HDA setup of the board, the S/PDIF output is not even hooked up correctly. Even after fixing that, it doesn’t work.

Of course, I’ve contacted the Supermicro support. After making clear to them what my use-case is, they ordered (!) an S/PDIF header and tested the analog part of it. Their technical support claims that their port is working, but they never replied to my question about which operating system they tested with, despite me asking multiple times.

It’s pretty disappointing to see that the support is unable to help here or at least confirm that it’s broken.

To address the issue, I’ve bought an ASUS Xonar DX sound card. It works out of the box on Linux, and its S/PDIF port works. The S/PDIF port is shared with the Line-in/Mic-in jack, but a suitable adapter is shipped with the card.

Wake-on-LAN

I haven’t gotten around to using Wake-on-LAN or Suspend-to-RAM yet. I will amend this section when I get around to it.

Conclusion

It’s clear that this mainboard is not for consumers. This begins with the awkward graphics and fan control setup and culminates in the apparently entirely untested S/PDIF output.

That said, once you get it working, it works reliably, and it seems like the only reasonable option with two onboard DisplayPort outputs.

by Michael Stapelberg at 10. May 2016 18:46:00

19. April 2016

RaumZeitLabor

MRMCD 2016: CFP

MRMCD Logo 2016

tl;dr: MRMCD 2016 takes place from September 2nd to 4th in Darmstadt.

The MRMCD clinic Darmstadt is a maximum-care IT security clinic. We are looking for practising participants from all areas of IT and hacker culture and would be delighted if you could contribute to our treatment programme. Our focus areas include network surgery and cryptography, plastic and reconstructive open-source development, as well as the use of robots in medicine. The clinic incorporates one of the largest and most modern centres in Germany for the treatment of severe and most severe medical device hacks, as well as a ward for injuries to the hacker ethic. This makes MRMCD 2016 the ideal environment for your current research projects. Our expert audience is interested in innovative therapeutic approaches from all areas of IT. An initial diagnosis of the conference topics of past years is available in the clinic archive.

Ready for new challenges?

Submit your compelling application in our applicant portal by 25 July 2016. Even if your desired speciality is not among our focus areas, we naturally welcome a speculative application. All new staff will be notified of the assignment of positions on 1 August 2016. If your application is accepted as part of our CFP, you can already look forward to our well-known all-round care in the exclusive senior consultants’ lounge.

The staff will be happy to answer questions about applications and the MRMCD clinic by e-mail.

The MetaRheinMainChaosDays (MRMCD) are an annual IT conference of the Chaos Computer Club (CCC) that has been taking place for more than ten years, with a slight thematic focus on IT security. Since 2012, the event has been organized in cooperation with the computer science department of the University of Applied Sciences Darmstadt (h_da). Besides a high-quality talk programme, the MRMCD offer the opportunity for relaxed exchange with numerous IT experts in an informal atmosphere.

This year’s MRMCD take place from September 2nd to 4th. Further information can be found at 2016.mrmcd.net.

by Alexander at 19. April 2016 00:00:00

14. March 2016

mfhs Blog

[c¼h] Testing with rapidcheck and Catch

Last Thursday, I once again gave a short talk at the Heidelberg Chaostreff NoName e.V. This time I talked about writing tests in C++ using the testing libraries rapidcheck and Catch.

In the talk I presented some ideas on how to incorporate property-based testing into test suites for C++ programs. The idea of property-based testing originates from the Haskell library QuickCheck, which uses properties of the input data and properties of the output data in order to generate random test cases for a code unit. So instead of testing a piece of code with just a small number of static tests, which are the same for each run of the test suite, we test with randomly seeded data. Additionally, if a test fails, QuickCheck/rapidcheck automatically simplifies the test case in order to find the simplest input which yields a failing test. This of course eases finding the underlying bug massively.

Since this was our first Treff in the new Mathematikon building, which the University only opened recently, we had a few technical difficulties with our setup. As a result, there is unfortunately no recording of my talk available this time. The example code I used during the presentation, however, is available on github. It contains a couple of buggy classes and functions and a Catch-based test program, which can be used to find these bugs.

Link                              Licence
c14h-rapidcheck-catch repository  GNU GPL v3

by mfh at 14. March 2016 00:00:25

06. March 2016

sECuREs Webseite

Docker on Travis for new tools and fast runs

Like many other open source projects, the i3 window manager is using Travis CI for continuous integration (CI). In our specific case, we not only verify that every pull request compiles and the test suite still passes, but we also ensure the code is auto-formatted using clang-format, does not contain detectable spelling errors and does not accidentally use C functions like sprintf() without error checking.

By offering their CI service for free, Travis provides a great service to the open source community, and I’m very thankful for that. Automatically running the test suite for contributions and displaying the results alongside the pull request is a feature that I’ve long wanted, but would have never gotten around to implementing in the home-grown code review system we used before moving to GitHub.

Motivation (more recent build environment)

Nothing is perfect, though, and some aspects of Travis can make it hard to work with. In particular, the build environment they provide is rather old: at the time of writing, the latest you can get is Ubuntu Trusty, which was released almost two years ago. I realize that Ubuntu Trusty is the current Ubuntu Long-Term Support release, but we want to move a bit quicker than being able to depend on new packages roughly once every two years.

For quite a while, we had to make do with that old environment. As a mitigation, in our .travis.yml file, we added the whitelisted ubuntu-toolchain-r-test source for newer versions of clang (notably also clang-format) and GCC. For integrating lintian’s spell checking into our CI infrastructure, we needed a newer lintian version, as the version in Ubuntu Trusty doesn’t have an interface for external scripts to use. Trying to make our .travis.yml file install a newer version of lintian (and only lintian!) was really challenging. To get a rough idea, take a look at our .travis.yml from back when we were still stuck with Ubuntu Precise, before we upgraded to Ubuntu Trusty. Cherry-picking a newer lintian version into Trusty would have been even more complicated.

With Travis starting to offer Docker in their build environment, and by looking at Docker’s contribution process, which also makes heavy use of containers, we were able to put together a better solution:

Implementation

The basic idea is to build a Docker container based on Debian testing and then run all build/test commands inside that container. Our Dockerfile installs compilers, formatters and other development tools first, then installs all build dependencies for i3 based on the debian/control file, so that we don’t need to duplicate build dependencies for Travis and for Debian.
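
The real Dockerfile lives in the i3 repository; a hypothetical sketch of the approach described above could look like this (package selection and paths are illustrative):

FROM debian:testing
# Development tools: compilers, clang-format, lintian for spell checking, and
# devscripts/equivs so that mk-build-deps is available below.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
      build-essential clang clang-format lintian devscripts equivs && \
    rm -rf /var/lib/apt/lists/*
# Install i3's build dependencies from debian/control instead of duplicating the list here.
COPY debian/control /tmp/debian-control
RUN apt-get update && \
    mk-build-deps --install --remove \
      --tool 'apt-get -y --no-install-recommends' /tmp/debian-control && \
    rm -rf /var/lib/apt/lists/*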

This solves the immediate issue nicely, but comes at a significant cost: building a Docker container adds quite a bit of wall clock time to a Travis run, and we want to give our contributors quick feedback. The solution to long build times is caching: we can simply upload the Docker container to the Docker Hub and make subsequent builds use the cached version.

We decided to cache the container for a month, or until inputs to the build environment (currently the Dockerfile and debian/control) change. Technically, this is implemented by a little shell script called ha.sh (get it? hash!) which prints the SHA-256 hash of the input files. This hash, appended to the current month, is what we use as tag for the Docker container, e.g. 2016-03-3d453fe1.
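
The real ha.sh is in the i3 repository; a minimal sketch of the idea (the input file names are the ones mentioned above, and the 8-character truncation matches the example tag):

#!/bin/sh
# Hash the files that define the build environment; the resulting tag is then
# "$(date +%Y-%m)-$(./ha.sh)", e.g. 2016-03-3d453fe1.
cat Dockerfile debian/control | sha256sum | cut -c 1-8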

See our .travis.yml for how to plug it all together.

Conclusion

We’ve been successfully using this setup for a bit over a month now. The advantages over pure Travis are:

  1. Our build environment is more recent, so we do not depend on Travis when we want to adopt tools that are only present in more recent versions of Linux.
  2. CI runs are faster: what used to take about 5 minutes now takes only 1-2 minutes.
  3. As a nice side effect, contributors can now easily run the tests in the same environment that we use on Travis.

There is some potential for even quicker CI runs: currently, all the different steps are run in sequence, but some of them could run in parallel. Unfortunately, Travis currently doesn’t provide a nice way to specify the dependency graph or to expose the different parts of a CI run in the pull request itself.

by Michael Stapelberg at 06. March 2016 19:00:00

01. January 2016

sECuREs Webseite

Prometheus: Using the blackbox exporter

Up until recently, I used to use kanla, a simple alerting program that I wrote 4 years ago. Back then, delivering alerts via XMPP (Jabber) to mobile devices like Android smartphones seemed like the best course of action.

About a year ago, I started using Prometheus for collecting monitoring data and alerting based on that data. See „Monitoring mit Prometheus“, my presentation about the topic at GPN, for more details and my experiences.

Motivation to switch to the Blackbox Exporter

Given that the Prometheus Alertmanager is already configured to deliver alerts to my mobile device, it seemed silly to rely on two entirely different mechanisms. Personally, I’m using Pushover, but Alertmanager integrates with many popular providers, and it’s easy to add another one.

Originally, I considered extending kanla in such a way that it would talk to Alertmanager, but then I realized that the Prometheus Blackbox Exporter is actually a better fit: it’s under active development and any features that are added to it benefit a larger number of people than the small handful of kanla users.

Hence, I switched from having kanla probe my services to having the Blackbox Exporter probe my services. The rest of the article outlines my configuration in the hope that it’s useful for others who are in a similar situation.

I’m assuming that you are already somewhat familiar with Prometheus and just aren’t using the Blackbox Exporter yet.

Blackbox Exporter: HTTP

The first service I wanted to probe is Debian Code Search. The following blackbox.yml configuration file defines a module called “dcs_results” which, when called, downloads the specified URL via an HTTP GET request. The probe is considered failed when the download doesn’t finish within the timeout of 5 seconds, or when the resulting HTTP body does not contain the text “load_font”.

modules:
  dcs_results:
    prober: http
    timeout: 5s
    http:
      fail_if_not_matches_regexp:
      - "load_font"

In my prometheus.conf, this is how I invoke the probe:

- job_name: blackbox_dcs_results
  scrape_interval: 60s
  metrics_path: /probe
  params:
    module: [dcs_results]
    target: ['http://codesearch.debian.net/search?q=i3Font']
  scheme: http
  target_groups:
  - targets:
    - blackbox-exporter:9115

As you can see, the search query is “i3Font”, and I know that “load_font” is one of the results. In case Debian Code Search does not deliver the expected search results, I know something is seriously broken. To make Prometheus actually generate an alert when this probe fails, we need an alert definition like the following:

ALERT ProbeFailing
  IF probe_success < 1
  FOR 15m
  WITH {
    job="blackbox_exporter"
  }
  SUMMARY "probe {{$labels.job}} failing"
  DESCRIPTION "probe {{$labels.job}} failing"

Blackbox Exporter: IRC

With the TCP probe module’s query/response feature, we can configure a module that verifies an IRC server lets us log in:

modules:
  irc_banner:
    prober: tcp
    timeout: 5s
    tcp:
      query_response:
      - send: "NICK prober"
      - send: "USER prober prober prober :prober"
      - expect: "PING :([^ ]+)"
        send: "PONG ${1}"
      - expect: "^:[^ ]+ 001"
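
Scraping this module works the same way as for the HTTP probe above; a sketch of the corresponding prometheus.conf job (the IRC server address is a placeholder):

- job_name: blackbox_irc_banner
  scrape_interval: 60s
  metrics_path: /probe
  params:
    module: [irc_banner]
    target: ['irc.example.net:6667']
  scheme: http
  target_groups:
  - targets:
    - blackbox-exporter:9115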

Blackbox Exporter: Git

The query/response feature can also be used for slightly more complex protocols. To verify a Git server is available, we can use the following configuration:

modules:
  git_code_i3wm_org:
    prober: tcp
    timeout: 5s
    tcp:
      query_response:
      - send: "002bgit-upload-pack /i3\x00host=code.i3wm.org\x00"
      - expect: "^[0-9a-f]+ HEAD\x00"

Note that the first characters are the ASCII-encoded hex length of the entire line:

$ echo -en '0000git-upload-pack /i3\x00host=code.i3wm.org\x00' | wc -c
43
$ perl -E 'say sprintf("%04x", 43)'
002b
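
If you want to double-check the expected response outside of the Blackbox Exporter, the same pkt-line can be sent by hand. This is a quick sketch, not part of the monitoring setup, and assumes the git daemon listens on the default port 9418:

# Send the request pkt-line and show the beginning of the ref advertisement,
# which should start with "<sha1> HEAD".
echo -en '002bgit-upload-pack /i3\x00host=code.i3wm.org\x00' \
  | nc code.i3wm.org 9418 | head -c 64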

The corresponding git URL for the example above is git://code.i3wm.org/i3. You can read more about the git protocol at Documentation/technical/pack-protocol.txt.

Blackbox Exporter: Meta-monitoring

Don’t forget to add an alert that will fire if the blackbox exporter is not available:

ALERT BlackboxExporterDown
  IF count(up{job="blackbox_dcs_results"} == 1) < 1
  FOR 15m
  WITH {
    job="blackbox_meta"
  }
  SUMMARY "blackbox-exporter is not up"
  DESCRIPTION "blackbox-exporter is not up"

by Michael Stapelberg at 01. January 2016 19:00:00

20. December 2015

sECuREs Webseite

(Not?) Hosting small projects on Container Engine

Note: the postings on this site are my own and do not necessarily represent the postings, strategies or opinions of my employer.

Background

For the last couple of years, faq.i3wm.org was running on a dedicated server I rented. I partitioned that server into multiple virtual machines using KVM, and one of these VMs contained the faq.i3wm.org installation. In that VM, I directly used pip to install the django-based askbot, a stack overflow-like questions & answers web application.

Every upgrade of askbot brought with it at least some small annoyances. For example, with the django 1.8 release, one had to change the cache settings (I would have expected compatibility, or at least a suggested/automated config file update). A new release of a library dependency broke the askbot installation. The askbot 0.9.0 release was not installable on Debian-based systems. In short, for every upgrade you’d need to plan a couple of hours for identifying and possibly fixing numerous small issues like these.

Once Docker was released, I started using askbot in Docker containers. I’ll talk a bit more about the advantages in the next section.

With the software upgrade burden largely mitigated by using Docker, I’ve had some time to consider bigger issues, namely disaster recovery and failure tolerance. The story for disaster recovery up to that point was daily off-site backups of the underlying PostgreSQL database. Because askbot was packaged in a Docker container, it became feasible to quickly get exactly the same version back up and running on a new server. But what about failure tolerance? If the server which runs the askbot Docker container suddenly dies, I would need to manually bring up the replacement instance from the most recent backup, and in the timespan between the hardware failure and my intervention, the FAQ would be unreachable.

The desire to make hardware failures a non-event for our users led me to evaluate Google Container Engine (abbreviated GKE) for hosting faq.i3wm.org. The rest of the article walks through the motivation behind each layer of technology that’s used when hosting on GKE, how exactly one would go about it, how much one needs to pay for such a setup, and concludes with my overall thoughts on the experience.

Motivation behind each layer

Google Container Engine is a hosted version of Kubernetes, which in turn schedules Docker containers on servers and ensures that they are staying up. As an example, you can express “I always want to have one instance of the prosody/prosody Docker container running, with the Persistent Disk volume prosody-disk mounted at /var/lib/prosody” (prosody is an XMPP server). Or, you could make it 50 instances, just by changing a single number in your configuration file.

So, let’s dissect the various layers of technology that we’re using when we run containers on GKE and see what each of them provides, from the lowest layer upwards.

Docker

Docker combines two powerful aspects:

  1. Docker allows us to package applications with a common interface. No matter which application I want to run on my server, all I need is to tell Docker the container name (e.g. prom/prometheus) and then configure a subset of volumes, ports, environment variables and links between the different containers.
  2. Docker containers are self-contained and can (I think they should!) be treated as immutable snapshots of an application.

This results in a couple of nice properties:

  • Moving applications between servers: Much like live-migration of Virtual Machines, it becomes really easy to move an application from one server to another. This covers both regular server migrations and emergency procedures.
  • Being able to easily test a new version and revert, if necessary: Much like filesystem snapshots, you can easily switch back and forth between different versions of the same software, just by telling Docker to start e.g. prom/node-exporter:0.10.0 instead of prom/node-exporter:0.9.0 (see the sketch after this list). Notably, if you treat containers themselves as read-only and use volumes for storage, you might be able to revert to an older version without having to throw away the data that you have accumulated since you upgraded (provided there were no breaking changes in the data structure).
  • Upstream can provide official Docker images instead of relying on Linux distributions to package their software. Notably, this does away with the notion that Linux distributions provide value by integrating applications into their own configuration system or structure. Instead, software distribution gets unified across Linux distributions. This property also pushes out the responsibility for security updates from the Linux distribution to the application provider, which might be good or bad, depending on the specific situation and point of view.
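
To illustrate the switching back and forth, here is a rough sketch using the node exporter image mentioned above (the container name is made up for this example):

# Run a pinned, known-good version of the node exporter.
docker run -d --name node-exporter prom/node-exporter:0.9.0

# Try the new version: remove the old container and start the new tag.
docker rm -f node-exporter
docker run -d --name node-exporter prom/node-exporter:0.10.0

# Reverting works the same way, just with the old tag again.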

Kubernetes

Kubernetes is the layer which makes multiple servers behave like a single, big server. It abstracts individual servers away:

  • Machine failures are no longer a problem: When a server becomes unavailable for whichever reason, the containers which were running on it will be brought up on a different server. Note that this implies some sort of machine-independent storage solution, like Persistent Disk, and also multiple failure domains (e.g. multiple servers) to begin with.
  • Updates of the underlying servers get easier, because Kubernetes takes care of re-scheduling the containers elsewhere.
  • Scaling out a service becomes easier: you adjust the number of replicas, and Kubernetes takes care of bringing up that number of Docker containers (see the sketch after this list).
  • Configuration gets a bit easier: Kubernetes has a declarative configuration language where you express your intent, and Kubernetes will make it happen. In comparison to running Docker containers with “docker run” from a systemd service file, this is an improvement because the number of edge-cases in reliably running a Docker container is fairly high.
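
As a sketch of the scaling point above, using the nginx Replication Controller that is defined later in this article (the names are therefore assumptions at this point):

# Run three replicas of the nginx Replication Controller instead of one.
kubectl scale rc nginx --replicas=3

# Watch the additional Pods come up.
kubectl get pods -l app=nginx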

Google Container Engine

While one could rent some dedicated servers and run Kubernetes on them, Google Container Engine offers that as a service. GKE offers some nice improvements over a self-hosted Kubernetes setup:

  • It’s a hosted environment, i.e. you don’t need to do updates and testing yourself, and you can escalate any problems.
  • Persistent Disk: your data will be stored on a distributed file system and you will not have to deal with dying disks yourself.
  • Persistent Disk snapshots are globally replicated (!), providing an additional level of data protection in case there is a catastrophic failure in an entire datacenter or region.
  • Logs are centrally collected, so you don’t need to set up and maintain your own central syslog installation.
  • Automatic live-migration: The underlying VMs (on Google Compute Engine) are automatically live-migrated before planned maintenance, so you should not see downtime unless unexpected events occur. This is not yet state of the art at every hosting provider, which is why I mention it.

The common theme in all of these properties is that while you could do each of these yourself, it would be very expensive, both in terms of actual money for the hardware and underlying services, and in terms of your time. When using Kubernetes as a small-scale user, I think going with a hosted service such as GKE makes a lot of sense.

Getting askbot up and running

Note that I expect you to have skimmed over the official Container Engine documentation, which also provides walkthroughs of how to set up e.g. WordPress.

I’m going to illustrate running a Docker container by just demonstrating the nginx-related part. It covers the most interesting aspects of how Kubernetes works, and packaging askbot is out of scope for this article. Suffice it to say that you’ll need containers for nginx, askbot, memcached and postgres.

Let’s start with the Replication Controller for nginx, which is a logical unit that creates new Pods (Docker containers) whenever necessary. For example, if the server which holds the nginx Pod goes down, the Replication Controller will create a new one on a different server. I defined the Replication Controller in nginx-rc.yaml:

# vim:ts=2:sw=2:et
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    env: prod
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      containers:
      - name: nginx
        # Use nginx:1 in the hope that nginx will not break their
        # configuration file format without bumping the
        # major version number.
        image: nginx:1
        # Always do a docker pull when starting this pod, as the nginx:1
        # tag gets updated whenever there is a new nginx version.
        imagePullPolicy: Always
        ports:
        - name: nginx-http
          containerPort: 80
        - name: nginx-https
          containerPort: 443
        volumeMounts:
        - name: nginx-config-storage
          mountPath: /etc/nginx/conf.d
          readOnly: true
        - name: nginx-ssl-storage
          mountPath: /etc/nginx/i3faq-ssl
          readOnly: true
      volumes:
      - name: nginx-config-storage
        secret:
          secretName: nginx-config
      - name: nginx-ssl-storage
        secret:
          secretName: nginx-ssl

You can see that I’m referring to two volumes which are called Secrets. This is because static read-only files are not yet supported by Kubernetes (Update 2016-07-11: ConfigMap is now available in Kubernetes). So, in order to bring the configuration and SSL certificates to the docker container, I’ve chosen to create a Secret for each of them. An alternative would be to create my own Docker container based on the official nginx container, and then add my configuration in there. I dislike that approach because it signs me up for additional maintenance: with the Secret injection method, I’ll just use the official nginx container, and nginx upstream will take care of version updates and security updates. For creating the Secret files, I’ve created a small Makefile:

all: nginx-config-secret.yaml nginx-ssl-secret.yaml

nginx-config-secret.yaml: static/faq.i3wm.org.conf
	./gensecret.sh nginx-config >$@
	echo "  faq.i3wm.org.conf: $(shell base64 -w0 static/faq.i3wm.org.conf)" >> $@

nginx-ssl-secret.yaml: static/faq.i3wm.org.startssl256-combined.crt static/faq.i3wm.org.startssl256.key static/dhparams.pem
	./gensecret.sh nginx-ssl > $@
	echo "  faq.i3wm.org.startssl256-combined.crt: $(shell base64 -w0 static/faq.i3wm.org.startssl256-combined.crt)" >> $@
	echo "  faq.i3wm.org.startssl256.key: $(shell base64 -w0 static/faq.i3wm.org.startssl256.key)" >> $@
	echo "  dhparams.pem: $(shell base64 -w0 static/dhparams.pem)" >> $@

The gensecret.sh script is just a simple file template:

#!/bin/sh
# gensecret.sh <name>
cat <<EOT
# vim:ts=2:sw=2:et:filetype=conf
apiVersion: v1
kind: Secret
metadata:
  name: $1
type: Opaque
data:
  # TODO: once either of the following two issues is fixed,
  # migrate away from secrets for configs:
  # - https://github.com/kubernetes/kubernetes/issues/1553
  # - https://github.com/kubernetes/kubernetes/issues/13610
EOT

Finally, we will also need a Service definition so that incoming connections can be routed to the Pod, regardless of where it lives. This will be nginx-svc.yaml:

# vim:ts=2:sw=2:et
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx
  type: LoadBalancer

I then committed these files into the private git repository that GKE provides and used kubectl to bring it all up:

$ make
$ kubectl create -f nginx-config-secret.yaml
$ kubectl create -f nginx-ssl-secret.yaml
$ kubectl create -f nginx-rc.yaml
$ kubectl create -f nginx-svc.yaml

Because we’ve specified type: LoadBalancer in the Service definition, a static IP address will be allocated and can be obtained by using kubectl describe svc nginx. You should be able to access the nginx server on that IP address now.
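
For example, to extract just that line from the describe output (a sketch; the exact field name may vary across kubectl versions):

# Show the externally reachable IP address of the nginx service.
kubectl describe svc nginx | grep 'LoadBalancer Ingress'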

Cost

I like to split the cost of running askbot on GKE into four chunks: the underlying VM instances (biggest chunk), the network load balancing (surprisingly big chunk), storage and all the rest, like network traffic.

Cost: VM instances

The cheapest GKE cluster you can technically run consists of three f1-micro nodes. If you try to start one with fewer nodes, you’ll get an error message: “ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=One f1-micro instance is not sufficient for a cluster with logging and/or monitoring enabled. Please use a larger machine-type, at least three f1-micro instances, or disable both logging and monitoring.”

However, three f1-micro nodes will not be enough to successfully run a web application such as askbot. The f1-micro instances don’t have a reserved CPU core, so you are using left-over capacity, and sometimes that capacity might not be enough. I have seen cases where askbot would not even start up within 10 minutes on an f1-micro instance. I definitely recommend you skip f1-micro instances and directly go with g1-small instances, the next bigger instance type.

For the g1-small instances, you can go as low as two machines. Also be specific about how much disk the machines need, otherwise you will end up with the default disk size of 100 GB, which might be unnecessary for your use case. I’ve used this command to create my cluster:

gcloud container clusters create i3-faq \
  --num-nodes=2 \
  --machine-type=g1-small \
  --disk-size=10

At 0.021 USD/hour for a g1-small instance that you run continuously, the VM instances will add up to about 30 USD/month.

Cost: Network load balancing

As explained before, the only way to get a static IP address for a service to which you can point your DNS records is to use Network Load Balancing. At 0.025 USD/hour, this adds up to about 18 USD/month.

Cost: Storage

While Persistent Disk comes in at 0.04 USD/GB/month, consider that a certain minimum size of a Persistent Disk volume is necessary in order to get a certain performance out of it: the Persistent Disk docs explain how a 250 GB volume size might be required to match the performance of a typical 7200 RPM SATA drive if you’re doing small random reads.

I ended up paying for 196 Gibibyte-months, which adds up to 7.88 USD. This included snapshots, of which I created one per day and always kept the five most recent around.

Cost: Conclusion

The rest, mostly network traffic, added up to 1 USD, but keep in mind that this instance was just set up and did not receive real user traffic.

In total, I was paying 57 USD/month.

Final thoughts

I thought my whole cloud experience was pretty polished overall, with no big rough edges. Certainly, Kubernetes is usable right now (at least in its packaged form in Google Container Engine), and I have no doubt it will get better over time.

The features which GKE offers are exactly what I’m looking for, and Google offers them for a fraction of the price that it would cost me to build, and most importantly run, them myself. This applies to Persistent Disk, globally replicated snapshots, centralized logs, automatic live migration and more.

At the same time, I found it slightly too expensive for what is purely a hobby project. Especially the network load balancing pushed the bill over the threshold of what I find acceptable for hosting a single application. If GKE becomes usable without the network load balancing, or the prices drop, I’d whole-heartedly recommend it for hobby projects.

by Michael Stapelberg at 20. December 2015 22:35:00

Configuring locales on Linux

While modern desktop environments that are used on Linux set up the locale with support for UTF-8, users who prefer not to run a desktop environment or users who use SSH to work on remote computers occasionally face trouble setting up their locale correctly.

On Linux, the locale is defined via the LANG environment variable and a number of environment variables starting with LC_, e.g. LC_MESSAGES.

Your system’s available locales

With locale -a, you can get a list of available locales, e.g.:

$ locale -a
C
C.UTF-8
de_CH.utf8
en_DK.utf8
en_US.utf8
POSIX

The format of these values is [language[_territory][.codeset][@modifier]], see wikipedia:Locale.

You can configure the locales that should be generated on your system in /etc/locale.gen. On Debian, sudo dpkg-reconfigure locales brings up a front-end for that configuration file which automatically runs locale-gen(8) after you’ve made changes.
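
On Debian, the non-interactive route looks roughly like this (a sketch; sudo dpkg-reconfigure locales is the more comfortable way to do the same thing):

# Uncomment the desired locale in /etc/locale.gen …
sudo sed -i 's/^# *de_CH.UTF-8 UTF-8/de_CH.UTF-8 UTF-8/' /etc/locale.gen

# … and (re)generate the locale data.
sudo locale-gen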

The environment variables

  1. LC_ALL overrides all variables starting with LC_ and LANG.
  2. LANG is the fallback when more specific LC_ environment variables are not set.
  3. The individual LC_ variables are documented in The Single UNIX ® Specification.

Based on the above, my advice is to never set LC_ALL, to set LANG to the locale you want to use, and to possibly override specific aspects using the relevant LC_ variables.

As an example, my personally preferred setup looks like this:

unset LC_ALL
export LANG=de_CH.UTF-8
export LC_MESSAGES=C
export LC_TIME=en_DK.UTF-8

This slightly peculiar setup sets LANG=de_CH.UTF-8 so that the default value corresponds to Switzerland, where I live.

Then, it specifies via LC_MESSAGES=C that I prefer program output to not be translated. This corresponds to programs having English output in all cases that are relevant to me, but strictly speaking you could come across programs whose untranslated language isn’t English, so maybe you’d prefer LC_MESSAGES=en_US.UTF-8.

Finally, via LC_TIME=en_DK.UTF-8, I configure date/time output to use the ISO 8601 format, i.e. YYYY-mm-dd HH:MM:SS and a 24-hour clock.

Displaying the current locale setup

By running locale without any arguments, you can see the currently effective locale configuration:

$ locale
LANG=de_CH.UTF-8
LANGUAGE=
LC_CTYPE="de_CH.UTF-8"
LC_NUMERIC="de_CH.UTF-8"
LC_TIME=en_DK.UTF-8
LC_COLLATE="de_CH.UTF-8"
LC_MONETARY="de_CH.UTF-8"
LC_MESSAGES=C
LC_PAPER="de_CH.UTF-8"
LC_NAME="de_CH.UTF-8"
LC_ADDRESS="de_CH.UTF-8"
LC_TELEPHONE="de_CH.UTF-8"
LC_MEASUREMENT="de_CH.UTF-8"
LC_IDENTIFICATION="de_CH.UTF-8"
LC_ALL=

Where to set the environment variables?

Unfortunately, there is no single configuration file that allows you to set environment variables. Instead, each shell reads a slightly different set of configuration files, see wikipedia:Unix_shell#Configuration_files for an overview. If you’re unsure which shell you are using, try using readlink /proc/$$/exe.

Configuring the environment variables in the shell covers logins on the text console and via SSH, but you’ll still need to set the environment variables for graphical sessions. If you’re using a desktop environment such as GNOME, the desktop environment will configure the locale for you. If you’re using a window manager, you should be using an Xsession script (typically found in ~/.xsession or ~/.xinitrc).

To keep the configuration centralized, I recommend you create a file that you can include from both your shell config and your Xsession:

cat > ~/.my_locale_env <<'EOT'
unset LC_ALL
export LANG=de_CH.UTF-8
export LC_MESSAGES=C
export LC_TIME=en_DK.UTF-8
EOT

echo 'source ~/.my_locale_env' >> ~/.zshrc
sed -i '2isource ~/.my_locale_env' ~/.xsession

Remember to make these settings both on your local machines and on the machines you log into remotely.

Non-interactive SSH sessions

Notably, the above setup only covers interactive sessions. When you run e.g. ssh server ls /tmp, ssh will actually use a non-interactive non-login shell. For most shells, this means that the usual configuration files are not read.

In order for your locale setup to apply to non-interactive SSH commands as well, ensure that your SSH client is configured with SendEnv LANG LC_* to send the environment variables to the SSH server when connecting. On the server, you’ll need to have AcceptEnv LANG LC_* configured. Recent versions of OpenSSH include these settings by default in /etc/ssh/ssh_config and /etc/ssh/sshd_config, respectively. If that’s not the case on your machine, use echo "SendEnv LANG LC_*" >> ~/.ssh/config.
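
For reference, the relevant directives look like this (shown here for all hosts; adjust the Host pattern to your liking):

# Client side, e.g. in ~/.ssh/config: send the locale variables along.
Host *
    SendEnv LANG LC_*

# Server side, in /etc/ssh/sshd_config: accept them.
AcceptEnv LANG LC_*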

To verify which variables are getting sent, run SSH with the -v flag and look for the line “debug1: Sending environment.”:

$ ssh localhost env
[…]
debug1: Sending environment.
debug1: Sending env LC_TIME = en_DK.UTF-8
debug1: Sending env LANG = de_CH.UTF-8
debug1: Sending env LC_MESSAGES = C
debug1: Sending command: env
[…]

Debugging locale issues

You can introspect the locale-related environment variables of any process by inspecting the /proc/${PID}/environ file (where ${PID} stands for the process id of the process). As an example, this is how you verify your window manager is using the expected configuration, provided you use i3:

$ tr '\0' '\n' < /proc/$(pidof i3)/environ | grep -e '^\(LANG\|LC_\)'
LC_MESSAGES=C
LANG=de_CH.UTF-8
LC_TIME=en_DK.UTF-8

In order for Unicode text input to work, your terminal emulator (e.g. urxvt) and the program you are using inside it (e.g. your shell, like zsh, or a terminal multiplexer like screen, or a chat program like irssi, etc.) both should use a locale whose codeset is UTF-8.

A good end-to-end test could be to run the following perl command:

$ perl -MPOSIX -Mlocale -e 'print strftime("%c", localtime) . "\n"'
2015-12-20T16:22:09 CET

In case your locales are misconfigured, perl will complain loudly:

$ LC_TIME=en_AU.UTF-8 perl -MPOSIX -Mlocale -e 'print strftime("%c", localtime) . "\n"'
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_TIME = "en_AU.UTF-8",
	LC_MESSAGES = "C",
	LANG = "de_CH.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("de_CH.UTF-8").
Son 20 Dez 2015 16:22:58 CET

by Michael Stapelberg at 20. December 2015 16:30:00

02. December 2015

sECuREs Webseite

Prometheus Alertmanager meta-monitoring

I’ve been happily using Prometheus for monitoring and alerting for about a year.

Regardless of the monitoring system, one problem I was never quite sure how to solve well is meta-monitoring: if you have a monitoring system, how do you know that the monitoring system itself is running? You’ll need another level of monitoring/alerting (hence “meta-monitoring”).

Recently, I realized that I could use Gmail for meta-monitoring: Google Apps Script allows users to run JavaScript code periodically that has access to Gmail and other Google apps. That way, I can have a periodically executed script which looks for emails from my monitoring/alerting infrastructure, and if there are none for 2 days, I get an alert email from that script.

That’s a rather simple way of having an entirely different layer of monitoring code, so that the two monitoring systems don’t suffer from a common bug. Further, the code is running on Google servers, so hardware failures of my monitoring system don’t affect it.

The rest of this article walks you through the setup, assuming you’re already using Prometheus, Alertmanager and Gmail.

Installing the meta-monitoring Google Apps Script

See the “Your first script” instructions for how to create a new Google Apps Script file. Then, use the following code, of course replacing the email addresses of your Alertmanager instance and your own email address:

// vim:ts=2:sw=2:et:ft=javascript
// Licensed under the Apache 2 license.
// © 2015 Google Inc.

// Runs every day between 07:00 and 08:00 in the local time zone.
function checkAlertmanager() {
  // Find all matching email threads within the last 2 days.
  // Should result in 2 threads, unless something went wrong.
  var search_atoms = [
    'from:alertmanager@example.org',
    'subject:daily alert test',
    'newer_than:2d',
  ];
  var threads = GmailApp.search(search_atoms.join(' '));
  if (threads.length === 0) {
    GmailApp.sendEmail(
      'michael@example.org',
      'ALERT: alertmanager test mail is missing for 2d',
      'Subject says it all');
  }
}

In the menu, select “Resources → Current project’s triggers”. Click “Add a new trigger”, select “Time-driven”, “Day timer” and set the time to “7am to 8am”. This will make the script run every day between 07:00 and 08:00. The time doesn’t really matter, but you need to specify something. I went for the 07:00-08:00 timespan because that’s shortly before I typically get up, so I’ll likely be presented with the freshest results just when I get up.

You can now either wait a day for the trigger to fire, or you can select the checkAlertmanager function in the “Run” menu to run it right away. You should end up with an email in your inbox, notifying you that the daily alert test is missing, which is expected since we did not configure it yet :).

Configuring a daily test alert in Prometheus

Create a file called dailytest.rules with the following content:

ALERT DailyTest
  IF vector(1) > 0
  FOR 1m
  LABELS {
    job = "dailytest",
  }
  ANNOTATIONS {
    summary = "daily alert test",
    description = "daily alert test",
  }

Then, include it in the rules section of your Prometheus config, for example as sketched below. After restarting Prometheus or sending it a SIGHUP signal, you should see the new alert on the /alerts status page, as in the screenshot further down.
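
A minimal sketch of what that reference might look like (the exact key name may depend on your Prometheus version; the relative path is an assumption):

rule_files:
  - "dailytest.rules"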

prometheus daily alert

Configuring Alertmanager

In your Alertmanager configuration, you’ll need to specify where that alert should be delivered to and how often it should repeat. I suggest you add a dedicated route and receiver that you’ll use specifically for the daily alert test and nothing else, so that you never accidentally change something:

route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 1h
  receiver: team-X-pager

  routes:
  - match:
      job: dailytest
    receiver: dailytest
    repeat_interval: 1d

receivers:
- name: 'dailytest'
  email_configs:
  - to: 'michael+alerts@example.org'

Send Alertmanager a SIGHUP signal to make it reload its configuration file. After Prometheus has been running for a minute, you should see the following alert on your Alertmanager’s /alerts status page:

prometheus alertmanager alert

Adding a Gmail filter to hide daily test alerts

Finally, once you verified everything is working, add a filter so that the daily test alerts don’t clutter your Gmail inbox: put “from:(alertmanager@example.org) subject:(DailyTest)” into the search box, click the drop-down icon, click “Create filter with this search”, select “Skip the Inbox”.

gmail filter screenshot

by Michael Stapelberg at 02. December 2015 09:50:00

11. November 2015

RaumZeitLabor

Stick-Seminar #2

On Saturday, 05 December 2015, from 10:00 to 23:00, there will be a seminar in which participants learn to create their own embroidery designs and to embroider them onto (almost) any textiles.

Embroidery

The seminar is free apart from consumables: 20 cents per 1000 stitches and, if needed, T-shirts (5€) / hoodies (30€) from the RZL stock; of course you can also bring your own textiles to embroider. Whoever registers before 01.12. gets cake and has their designs embroidered first.

It’s not just the usual suspects (sweaters, T-shirts, any other clothing) that can be embroidered; I have also embroidered designs onto polyester microfibre and sewn pillowcases from it. I can help with that too if somebody wants to do it.

No prior knowledge is necessary, but it makes sense to install the current version of Inkscape before the workshop and to go through the first two tutorials; there is also a guide on the topic in the RZL wiki, but all of that is optional.

Next Tuesday (17.11.) at 20:00 I will order fabric from stoffe-tippel.de and from this shop, because unfortunately neither has all the colours I would like. If somebody wants to add something to the order, send me an email. For a pillowcase you also need a zipper; I can get those, they cost just under 6€.

by Alexander at 11. November 2015 00:00:00

05. October 2015

RaumZeitLabor

Creating embroidery designs yourself

Template and result

The embroidery software used at the RZL allows importing raster graphics; there has been a tutorial for that since April 2012.

This method has advantages and disadvantages: it is comparatively quick and usually produces nice results, but it allows little control over the outcome, and for perfect embroidery files quite a lot of touching up is required.

If you have the template as an SVG, you can exploit the advantages of the format:

  • The outlines are known exactly and do not have to be vectorized automatically.
  • The order of the objects is known exactly, so outlines are automatically stitched after area fills and overlapping parts are stitched in the right order.
  • In the vector graphic you can partially specify in advance which parts should be realized with which stitch type.

That is why I have been using exclusively this method for some time. For my last embroidery project I documented every step precisely and wrote a detailed tutorial.

by Alexander at 05. October 2015 00:00:00

02. October 2015

mfhs Blog

51st Symposium on Theoretical Chemistry (Potsdam)

Last week I went to the annual Symposium on Theoretical Chemistry. This time, the 51st STC took place from the 20th till the 24th of September in Potsdam.

Unlike last year, where the conference business was all new to me, this year it felt more like returning to a place with a familiar atmosphere and many familiar faces. Whilst the conference surely was again a great opportunity to present my work and to learn about theoretical chemistry in the lectures, this time it had the additional aspect of catching up with the people I had already met last year. Most of all I enjoyed the poster sessions this year, probably because of the many discussions I had with other researchers and PhD students about their recent advances.

Concerning our project, we were still not able to obtain noteworthy results with our current implementation of finite-element Hartree-Fock (see the poster attached below). This is mainly due to the extreme memory requirements that the calculation of the Hartree-Fock exchange matrix imposes on the program: right now we store this whole beast in memory and hence the memory we require scales quadratically with the number of finite elements. In other words, even for extremely small test cases we need gigabytes of memory to achieve only very inadequate accuracy.

Together with James Avery, who visited us for a few days in July, we recently came up with a new scheme for implementing Hartree-Fock exchange within the finite-element method. This looks very promising, since it decreases both the memory and the computational cost to linear scaling. Right now we are struggling with the implementation, however. In November I will visit James in Copenhagen for 3 weeks. If all goes well, we will hopefully overcome these problems and work out a good structure for an FE-HF program that incorporates everything we have learned so far.

Poster P42 STC 2015 (Creative Commons License)

by mfh at 02. October 2015 00:00:13

01. October 2015

Moredreads Blog

30. September 2015

mfhs Blog

[c¼h] Einführung in die Elektronenstrukturtheorie (Introduction to electronic structure theory)

The week before my bash scripting course I gave another short talk at the weekly meeting of the Heidelberg Chaostreff NoName e.V. Unlike last time, the talk was not concerned with a traditional "hacker" topic; rather, I tried to give a brief introduction to my own research field.

Of course, 15 to 20 minutes are not enough to go deep into finite-element Hartree-Fock, so I ended up giving a small introduction to electronic structure theory instead. Questions which were addressed:

  • Why is electronic structure theory useful? What kind of information can I retrieve using it?
  • What are the fundamental physical and chemical concepts that lead to electronic structure theory?
  • What kind of approximations and physical ideas do we need in order to arrive at a method which is used in practice, e.g. the so-called Hartree-Fock method?

The talk was held in German and a recording is available (see below). Please be aware, however, that not everything I mention is, scientifically speaking, correct.

by mfh at 30. September 2015 00:00:33

24. September 2015

RaumZeitLabor

Agenda Diplom 2015 at the RaumZeitLabor

Junior lab members learn to solder

Learning to solder

As in previous years, we at the RaumZeitLabor once again took part in the Agenda Diplom of the city of Mannheim. On a total of five dates, 12 children each visited us to build a treasure chest. The special twist: the chest can only be opened by knocking with the right code.

Knock box

The circuit was built around an Arduino Pro Mini. A piezo transducer was used to pick up the knock signals, and a micro servo drives the locking mechanism. The wooden box was originally supposed to be produced with our laser cutter “FazZz0r”. Due to technical problems with the laser tube, the machine could not be operated, and friendly makerspaces in Stuttgart, Karlsruhe and Wuppertal helped us out splendidly.

by Ingo at 24. September 2015 00:00:00

19. September 2015

RaumZeitLabor

Hearthstone – Barstone#2: The Grand Tourney

On 26 September 2015 from 18:00, the RaumZeitLabor once again turns into the tavern »Zum güldenen Einhorn«! The release of the second Hearthstone expansion, »The Grand Tournament«, is the occasion to determine the best Hearthstone player in glorious competition.

2. BarStone

Pull up a chair, prepare your deck and let’s play a round of Hearthstone. We invite all players, whether Hearthstone newcomers or tournament veterans, to spend a cosy evening with us around the digital campfire. Get to know new people, exchange ideas with other players and pick up a few tricks! Plenty of Hearthstone will be played outside the tournament as well.

If you want to take part, download the free Hearthstone app and be at the RaumZeitLabor by 18:00. Registering in advance via Facebook or by mail to anmeldung@raumzeitlabor.de makes planning easier for us.

There is still a spot for you at our fire!

by Cheatha at 19. September 2015 00:00:00

12. September 2015

koebi

Basic Linux commands

At my university, I am responsible for the introductory programming course for absolute beginners organised by the student council. In that context, in order to have one place where this is written down clearly, I wanted to collect the most important Linux commands that a beginner…

12. September 2015 00:00:00

08. September 2015

RaumZeitLabor

MRMCD 2015: Basket bottles

Dear sports fans,

MRMCD 2015 ESA pictogram

very proudly and true to the motto “Faster. Higher. Further.”, we can present our newest piece of sports equipment, which will get you faster, higher and further: the Flaschenkorb 2.1 (Next Generation Basketbottle):

MRMCD 2015 basketball hoop with bottle collector

Gone are the days of conference tables full of empty single-use deposit bottles: from now on, every bottle can be comfortably thrown into the corresponding container from your seat. We are still waiting for TÜV Süd to approve the construction; until then we ask you to refrain from dunking attempts, as the screws might tear out.

With sporty regards,

the competition management of the MRMCD15

by Alexander at 08. September 2015 00:00:00

29. August 2015

RaumZeitLabor

Seminar: Embroider and sew your own bed linen

On Saturday, 10 October 2015, from 10:00 to 23:00, there will be a seminar in which participants learn to create their own embroidery designs, to embroider them onto polyester microfibre and then to sew pillowcases or duvet covers with a concealed zipper from it:

Concealed zipper

Whoever wants to take part has to register with me by email by 26.09.2015 23:59. The number of participants is limited; if there are more registrations than places, lots will be drawn.

For a normal pillowcase you need 2m of fabric (14€) and a zipper with slider (5€). For a duvet cover you need 5m of fabric (35€) and a long zipper with slider (6€). For appliqués you additionally need 1m of fabric per colour (7€); leftover pieces can be used if several participants need the same colours.

No prior knowledge is necessary, but it makes sense to install the current version of Inkscape before the workshop and to go through the first two tutorials; there is also a guide on the topic in the RZL wiki, but all of that is optional. If we do not finish on Saturday, the workshop will continue on Sunday from 12:00.

The registration must contain the following information:

  • Type (pillowcase or duvet cover).
  • Dimensions.
  • Desired base colour and, if applicable, appliqué colours. You can also get your own fabric if you prefer. The microfibre fabric has a silky, smooth feel and feels pleasantly cool on the skin.
  • Desired zipper colour (it might not be available, in which case I will pick a similar one; you won’t see it in the end anyway).
  • The desired embroidery design, so that I can check beforehand whether it is feasible at all.

Here are two examples in which I combined glow-in-the-dark and normal thread:

Glow-in-the-dark embroidery design that closes its eyes in the dark

In the second example, I realised the body and mane fill as appliqués:

Glow-in-the-dark embroidery design with appliqués

by Alexander at 29. August 2015 00:00:00

16. August 2015

mfhs Blog

Advanced bash scripting block course

Currently I am busy preparing the lecture notes and the exercises for the advanced bash scripting course that I will teach for PhD students of my graduate school. The course will be a block course running from the 24th till the 28th of August, and there are still some spaces left. So in case you are interested in learning how to write shell scripts in a structured way, you will find more information here. You can obtain the course material either from this page or from my github account mfherbst.

by mfh at 16. August 2015 00:00:52