Planet NoName e.V.

13. November 2017

sECuREs Webseite

Network setup for our retro computing event RGB2Rv17

Our computer association NoName e.V. organizes a retro computing event called RGB2R every year, located in Heidelberg, Germany. This year’s version is called RGB2Rv17.

This article describes the network setup I created for this year’s event.

The intention is not so much to provide a fully working setup (even though the setup did work fine for us as-is), but rather to inspire you to create your own network, based vaguely on what’s provided here.

Connectivity

The venue has a DSL connection with speeds reaching 1 Mbit/s if you’re lucky. Needless to say, that is not sufficient for the roughly 40 participants we had.

Luckily, there is (almost) direct line of sight to my parents’ place, and my dad recently got a 400 Mbit/s cable internet connection, which he’s happy to share with us :-).

WiFi antenna pole

Hardware

For the WiFi links to my parents’ place, we used two TP-Link CPE510 devices (CPE stands for Customer Premises Equipment) on each site. The devices only have 100 Mbit/s ethernet ports, which is why we used two of them.

The edge router for the event venue was a PC Engines apu2c4. For the Local Area Network (LAN) within the venue, we provided a few switches and WiFi using Ubiquiti Networks access points.

Software

On the apu2c4, I installed Debian 9 “stretch”, the latest Debian stable version at the time of writing. I prepared a USB thumb drive with the netinst image:

% wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-9.2.1-amd64-netinst.iso
% cp debian-9.2.1-amd64-netinst.iso /dev/sdb

Then, I…

  • plugged the USB thumb drive into the apu2c4,
  • on the serial console, pressed F10 (boot menu), then 1 (boot from USB),
  • in the Debian installer, selected Help, pressed F6 (special boot parameters), entered install console=ttyS0,115200n8,
  • and installed Debian as usual.

Initial setup

Debian stretch comes with systemd by default, but systemd-networkd(8) is not enabled by default, so I changed that:

edge# systemctl enable systemd-networkd
edge# systemctl disable networking

Also, I cleared the MOTD, placed /tmp on tmpfs and configured my usual environment:

edge# echo > /etc/motd
edge# echo 'tmpfs /tmp tmpfs defaults 0 0' >> /etc/fstab
edge# wget -qO- https://d.zekjur.net | bash -s

I also installed a few troubleshooting tools which came in handy later:

edge# apt install tcpdump net-tools strace

Disabling ICMP rate-limiting for debugging

I had to learn the hard way that Linux imposes a rate-limit on outgoing ICMP packets by default. This manifests itself as spurious timeouts in the traceroute output. To ease debugging, I disabled the rate limit entirely:

edge# cat >> /etc/sysctl.conf <<'EOT'
net.ipv4.icmp_ratelimit=0
net.ipv6.icmp.ratelimit=0
EOT
edge# sysctl -p

Renaming network interfaces

Descriptive network interface names are helpful when debugging. I won’t remember whether enp0s3 is the interface for an uplink or the LAN, so I assigned the names uplink0, uplink1 and lan0 to the apu2c4’s interfaces.

To rename network interfaces, I created a corresponding .link file for each interface (shown here for uplink0), had the initramfs pick it up, and rebooted:

edge# cat >/etc/systemd/network/10-uplink0.link <<'EOT'
[Match]
MACAddress=00:0d:b9:49:db:18

[Link]
Name=uplink0
EOT
edge# update-initramfs -u
edge# reboot

Network topology

Because our internet provider didn’t offer IPv6, and to keep my dad out of the loop in case any abuse issues should arise, we tunneled all of our traffic.

We decided to set up one tunnel per WiFi link, so that we could easily load-balance over the two links by routing IP flows into one of the two tunnels.

Here’s a screenshot from the topology dashboard which I made using the Diagram Grafana plugin:

Network interface setup

We configured IP addresses statically on the uplink0 and uplink1 interfaces because we needed to use static addresses in the tunnel setup anyway.

Note that we placed the default routes in route tables 110 and 111. Later on, we used iptables(8) to make traffic use either of these two default routes.

edge# cat > /etc/systemd/network/uplink0.network <<'EOT'
[Match]
Name=uplink0

[Network]
Address=192.168.178.10/24
IPForward=ipv4

[Route]
Gateway=192.168.178.1
Table=110
EOT
edge# cat > /etc/systemd/network/uplink1.network <<'EOT'
[Match]
Name=uplink1

[Network]
Address=192.168.178.11/24
IPForward=ipv4

[Route]
Gateway=192.168.178.1
Table=111
EOT

Tunnel setup

Originally, I configured OpenVPN for our tunnels. However, it turned out the apu2c4 tops out at 130 Mbit/s of traffic through OpenVPN. Notably, using two tunnels didn’t help — I couldn’t reach more than 130 Mbit/s in total. This is with authentication and crypto turned off.

This surprised me, but doesn’t seem too uncommon: on the internet, I could find reports of similar speeds with the same hardware.

Given that our setup didn’t require cryptography (applications are using TLS these days), I looked for lightweight alternatives and found Foo-over-UDP (fou), a UDP encapsulation protocol supporting IPIP, GRE and SIT tunnels.

Each configured Foo-over-UDP tunnel only handles sending packets. For receiving, you need to configure a listening port. If you want two machines to talk to each other, you therefore need a listening port on each, and a tunnel on each.

Note that you need one tunnel per address family: IPIP only supports IPv4, SIT only supports IPv6. In total, we ended up with 4 tunnels (2 WiFi uplinks with 2 address families each).

Also note that Foo-over-UDP provides no authentication: anyone who is able to send packets to your configured listening port can spoof any IP address. If you don’t restrict traffic in some way (e.g. by source IP), you are effectively running an open proxy.
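
One way to mitigate this is to accept FOU packets only from the known tunnel endpoint; a minimal sketch, assuming that 203.0.113.1 (the endpoint used in the tunnel configuration below) is the only legitimate peer:

edge# iptables -A INPUT -p udp -m multiport --dports 1704,1706,1714,1716 ! -s 203.0.113.1 -j DROP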

Tunnel configuration

First, load the kernel modules and set the corresponding interfaces to UP:

edge# modprobe fou
edge# modprobe ipip
edge# ip link set dev tunl0 up
edge# modprobe sit
edge# ip link set dev sit0 up

Configure the listening ports for receiving FOU packets:

edge# ip fou add port 1704 ipproto 4
edge# ip fou add port 1706 ipproto 41

edge# ip fou add port 1714 ipproto 4
edge# ip fou add port 1716 ipproto 41

Configure the tunnels for sending FOU packets, using the local address of the uplink0 interface:

edge# ip link add name fou0v4 type ipip remote 203.0.113.1 local 192.168.178.10 encap fou encap-sport auto encap-dport 1704 dev uplink0
edge# ip link set dev fou0v4 up
edge# ip -4 address add 10.170.0.1/24 dev fou0v4

edge# ip link add name fou0v6 type sit remote 203.0.113.1 local 192.168.178.10 encap fou encap-sport auto encap-dport 1706 dev uplink0
edge# ip link set dev fou0v6 up
edge# ip -6 address add fd00::10:170:0:1/112 dev fou0v6 preferred_lft 0

Repeat for the uplink1 interface:

# (IPv4) Set up the uplink1 transmit tunnel:
edge# ip link add name fou1v4 type ipip remote 203.0.113.1 local 192.168.178.11 encap fou encap-sport auto encap-dport 1714 dev uplink1
edge# ip link set dev fou1v4 up
edge# ip -4 address add 10.171.0.1/24 dev fou1v4

# (IPv6) Set up the uplink1 transmit tunnel:
edge# ip link add name fou1v6 type sit remote 203.0.113.1 local 192.168.178.11 encap fou encap-sport auto encap-dport 1716 dev uplink1
edge# ip link set dev fou1v6 up
edge# ip -6 address add fd00::10:171:0:1/112 dev fou1v6 preferred_lft 0
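
Note that the commands above configure only the edge router. The tunnel endpoint needs the mirror image: the same listening ports, plus tunnels pointing back at the edge. A sketch for the uplink0 IPv4 tunnel, where <edge-ip> (the address under which the edge’s uplink is reachable from the endpoint, e.g. via port forwarding) and the remote tunnel address 10.170.0.2 are assumptions:

endpoint# ip fou add port 1704 ipproto 4
endpoint# ip link add name fou0v4 type ipip remote <edge-ip> local 203.0.113.1 encap fou encap-sport auto encap-dport 1704
endpoint# ip link set dev fou0v4 up
endpoint# ip -4 address add 10.170.0.2/24 dev fou0v4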

Load-balancing setup

In previous years, we experimented with setups using MLVPN for load-balancing traffic on layer 2 across multiple uplinks. Unfortunately, we weren’t able to get good results: when aggregating links, bandwidth would be limited to the slowest link. I expect that MLVPN and others would work better this year if we were to set it up directly before and after the WiFi uplinks, as the two links should be almost identical in terms of latency and throughput.

Regardless, we didn’t want to take any chances and decided to go with IP flow based load-balancing. The downside is that any individual connection can never be faster than the uplink over which it is routed. Given the number of concurrent connections in a typical network, we observed good utilization of both links in practice.

Let’s tell iptables to mark packets coming from the LAN with one of two values based on the hash of their source IP, source port, destination IP and destination port properties:

edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -j HMARK --hmark-tuple src,sport,dst,dport --hmark-mod 2 --hmark-offset 10 --hmark-rnd 0xdeadbeef

Note that the --hmark-offset parameter is required: mark 0 is the default, so you need an offset of at least 1.

For debugging, it is helpful to exempt the IP addresses we use on the tunnels themselves, otherwise we might not be able to ping an endpoint which is actually reachable:

edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 10.170.0.0/24 -m comment --comment "for debugging" -j MARK --set-mark 10
edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 10.171.0.0/24 -m comment --comment "for debugging" -j MARK --set-mark 11

Now, we need to add a routing policy to select the correct default route based on the firewall mark:

edge# ip -4 rule add fwmark 10 table 10
edge# ip -4 rule add fwmark 11 table 11

The steps for IPv6 are identical.
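
For example, selecting LAN traffic by its ingress interface (the LAN’s IPv6 prefix depends on your setup) and assuming the corresponding IPv6 default routes exist in tables 10 and 11:

edge# ip6tables -t mangle -A PREROUTING -i lan0 -j HMARK --hmark-tuple src,sport,dst,dport --hmark-mod 2 --hmark-offset 10 --hmark-rnd 0xdeadbeef
edge# ip -6 rule add fwmark 10 table 10
edge# ip -6 rule add fwmark 11 table 11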

Note that current OpenWrt (15.05) does not provide the HMARK iptables module. I filed a GitHub issue with OpenWrt.

Connectivity for the edge router

Because our default routes are placed in tables 110 and 111, the router itself does not have upstream connectivity. This is mostly working as intended, as it makes it harder to accidentally route traffic outside of the tunnels.

There is one exception: we need a route to our DNS server:

edge# ip -4 rule add to 8.8.8.8/32 lookup 110

It doesn’t matter which uplink we use for that, since DNS traffic is tiny.

Connectivity to the tunnel endpoint

Of course, the tunnel endpoint itself must also be reachable:

edge# ip rule add fwmark 110 lookup 110
edge# ip rule add fwmark 111 lookup 111

edge# iptables -t mangle -A OUTPUT -d 203.0.113.1/32 -p udp --dport 1704 -j MARK --set-mark 110
edge# iptables -t mangle -A OUTPUT -d 203.0.113.1/32 -p udp --dport 1714 -j MARK --set-mark 111
edge# iptables -t mangle -A OUTPUT -d 203.0.113.1/32 -p udp --dport 1706 -j MARK --set-mark 110
edge# iptables -t mangle -A OUTPUT -d 203.0.113.1/32 -p udp --dport 1716 -j MARK --set-mark 111

Connectivity to the access points

By clearing the firewall mark, we ensure traffic doesn’t get sent through our tunnel:

edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 192.168.178.250 -j MARK --set-mark 0 -m comment --comment "for debugging"
edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 192.168.178.251 -j MARK --set-mark 0 -m comment --comment "for debugging"
edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 192.168.178.252 -j MARK --set-mark 0 -m comment --comment "for debugging"
edge# iptables -t mangle -A PREROUTING -s 10.17.0.0/24 -d 192.168.178.253 -j MARK --set-mark 0 -m comment --comment "for debugging"

Also, since the access points are all in the same subnet, we need to tell Linux on which interface to send the packets, otherwise packets might egress on the wrong link:

edge# ip -4 route add 192.168.178.252 dev uplink0 src 192.168.178.10
edge# ip -4 route add 192.168.178.253 dev uplink1 src 192.168.178.11

MTU configuration

The WiFi links and the UDP encapsulation both eat into the usable packet size, so we lowered the MTUs on the uplink and tunnel interfaces accordingly to avoid fragmentation:

edge# ifconfig uplink0 mtu 1472
edge# ifconfig uplink1 mtu 1472
edge# ifconfig fou0v4 mtu 1416
edge# ifconfig fou0v6 mtu 1416
edge# ifconfig fou1v4 mtu 1416
edge# ifconfig fou1v6 mtu 1416
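
To verify that packets of the configured size make it through a tunnel without fragmentation, ping with the Don’t Fragment bit set: 1388 bytes of ICMP payload plus 28 bytes of IP and ICMP headers add up to exactly the 1416-byte tunnel MTU (the far-end tunnel address 10.170.0.2 is an assumption):

edge# ping -M do -s 1388 10.170.0.2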

Disabling an uplink

It might come in handy to quickly be able to disable an uplink, be it for diagnosing issues, performing maintenance on a link, or to work around a broken uplink.

Let’s create a separate iptables chain in which we can place temporary overrides:

edge# iptables -t mangle -N prerouting_override
edge# iptables -t mangle -A PREROUTING -j prerouting_override
edge# ip6tables -t mangle -N prerouting_override
edge# ip6tables -t mangle -A PREROUTING -j prerouting_override

With the following shell script, we can then install such an override:

#!/bin/bash
# vim:ts=4:sw=4
# enforces using a single uplink
# syntax:
#	./uplink.sh 0  # use only uplink0
#	./uplink.sh 1  # use only uplink1
#	./uplink.sh    # use both uplinks again

if [ "$1" = "0" ]; then
	# Use only uplink0
	MARK=10
elif [ "$1" = "1" ]; then
	# Use only uplink1
	MARK=11
else
	# Use both uplinks again
	iptables -t mangle -F prerouting_override
	ip6tables -t mangle -F prerouting_override
	ip -4 rule del to 8.8.8.8/32
	ip -4 rule add to 8.8.8.8/32 lookup "110"
	exit 0
fi

iptables -t mangle -F prerouting_override
iptables -t mangle -A prerouting_override -s 10.17.0.0/24 -j MARK --set-mark "${MARK}"
ip6tables -t mangle -F prerouting_override
ip6tables -t mangle -A prerouting_override -j MARK --set-mark "${MARK}"

ip -4 rule del to 8.8.8.8/32
ip -4 rule add to 8.8.8.8/32 lookup "1${MARK}"

MSS clamping

Because Path MTU discovery is often broken on the internet, it’s best practice to limit the Maximum Segment Size (MSS) of each TCP connection, achieving the same effect as a working Path MTU discovery (but only for TCP connections).

This technique is called “MSS clamping”, and can be implemented in Linux like so:

edge# iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o fou0v4 -j TCPMSS --clamp-mss-to-pmtu
edge# iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o fou1v4 -j TCPMSS --clamp-mss-to-pmtu
edge# ip6tables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o fou0v6 -j TCPMSS --clamp-mss-to-pmtu
edge# ip6tables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o fou1v6 -j TCPMSS --clamp-mss-to-pmtu

Traffic shaping

Shaping upstream

With asymmetric internet connections, such as the 400/20 cable connection we’re using, it’s necessary to shape traffic such that the upstream is never entirely saturated; otherwise, TCP ACK packets won’t reach their destination in time, and the downstream can no longer be saturated.

While the FritzBox might already provide traffic shaping, we wanted to voluntarily restrict our upstream usage to leave some headroom for my parents.

Hence, we’re shaping each uplink to 8 Mbit/s, which sums up to 16 Mbit/s, well below the available 20 Mbit/s:

edge# tc qdisc replace dev uplink0 root tbf rate 8mbit latency 50ms burst 4000
edge# tc qdisc replace dev uplink1 root tbf rate 8mbit latency 50ms burst 4000

The specified latency value is a best guess, and the burst value is derived from the kernel internal timer frequency (CONFIG_HZ) (!), packet size and rate as per https://unix.stackexchange.com/questions/100785/bucket-size-in-tbf.

Tip: keep in mind to disable shaping temporarily when you’re doing bandwidth tests ;-).
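
To check that a shaper is actually installed and whether it has to drop packets, tc can print statistics:

edge# tc -s qdisc show dev uplink0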

Shaping downstream

It’s somewhat of a mystery to me why this helped, but we achieved noticeably better bandwidth (50 Mbit/s without, 100 Mbit/s with shaping) when we also shaped the downstream traffic (i.e. made the tunnel endpoint shape traffic).
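
A sketch of the idea: the endpoint shapes what it sends into each tunnel, again with a token bucket filter, at a rate below what each WiFi link can carry. The interface names assume a mirror-image tunnel setup on the endpoint, and the rate and burst values are illustrative:

endpoint# tc qdisc replace dev fou0v4 root tbf rate 90mbit latency 50ms burst 45000
endpoint# tc qdisc replace dev fou1v4 root tbf rate 90mbit latency 50ms burst 45000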

LAN

For DHCP, DNS and IPv6 router advertisements, we set up dnsmasq(8), which worked beautifully and was way quicker to configure than the bigger ISC servers:

edge# apt install dnsmasq
edge# cat > /etc/dnsmasq.d/rgb2r <<'EOT'
interface=lan0
dhcp-range=10.17.0.10,10.17.0.250,30m
dhcp-range=::,constructor:lan0,ra-only
enable-ra
cache-size=10000
EOT
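
To quickly check how many leases are currently handed out, count the entries in dnsmasq’s lease file (default location on Debian):

edge# wc -l /var/lib/misc/dnsmasq.leases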

Monitoring

First, install and start Prometheus:

edge# apt install prometheus prometheus-node-exporter prometheus-blackbox-exporter
edge# systemctl enable prometheus
edge# systemctl restart prometheus
edge# systemctl enable prometheus-node-exporter
edge# systemctl restart prometheus-node-exporter
edge# systemctl enable prometheus-blackbox-exporter
edge# systemctl restart prometheus-blackbox-exporter
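
On Debian, the shipped /etc/prometheus/prometheus.yml already contains example scrape jobs (typically for Prometheus itself and the node exporter). For the blackbox exporter, a scrape job needs to be added to the scrape_configs section; a minimal sketch following the blackbox exporter’s usual relabeling pattern (the icmp module and the probe target are assumptions):

- job_name: blackbox
  metrics_path: /probe
  params:
    module: [icmp]
  static_configs:
    - targets: ['8.8.8.8']
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 127.0.0.1:9115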

Then, install and start Grafana:

edge# apt install apt-transport-https
edge# wget -qO- https://packagecloud.io/gpg.key | apt-key add -
edge# echo deb https://packagecloud.io/grafana/stable/debian/ stretch main > /etc/apt/sources.list.d/grafana.list
edge# apt update
edge# apt install grafana
edge# systemctl enable grafana-server
edge# systemctl restart grafana-server

Also, install the excellent Diagram Grafana plugin:

edge# grafana-cli plugins install jdbranham-diagram-panel
edge# systemctl restart grafana-server

Config files

I realize this post contains a lot of configuration excerpts which might be hard to put together. So, you can find all the config files in a git repository. As I mentioned at the beginning of the article, please create your own network and don’t expect the config files to just work out of the box.

Statistics

  • We peaked at about 60 active DHCP leases.

  • The connection tracking table (holding an entry for each IPv4 connection) never exceeded 4000 connections.

  • DNS traffic peaked at about 12 queries/second.

  • dnsmasq’s maximum cache size of 10000 records was sufficient: we did not have a single cache eviction over the entire event.

  • We were able to obtain peaks of over 150 Mbit/s of download traffic.

  • At peak, about 10% of our traffic was IPv6.

WiFi statistics

  • On link 1, our signal-to-noise ratio hovered between 31 dB and 33 dB. When it started raining, it dropped by 2-3 dB.

  • On link 2, our signal-to-noise ratio hovered between 34 dB and 36 dB. When it started raining, it dropped by 1 dB.

Despite the relatively bad signal-to-noise ratios, we could easily obtain about 140 Mbit/s on the WiFi layer, which results in 100 Mbit/s on the ethernet layer.

The difference in signal-to-noise ratio between the two links had no visible impact on bandwidth, but ICMP probes showed measurably more packet loss on link 1.

by Michael Stapelberg at 13. November 2017 21:45:00

10. November 2017

michael-herbst.com

Lazy matrices talk from the IWR school 2017

As mentioned in a previous post on this matter last month, from 2nd to 6th October, the IWR hosted the school Mathematical Methods for Quantum Chemistry, which I co-organised together with my supervisors Andreas Dreuw and Guido Kanschat as well as the head of my graduate school Michael Winckler.

From my personal point of view the school turned out to be a major success: I had the chance to meet a lot of interesting people and got a bucket of ideas to try out in the future. The feedback we got from the participants was positive as well, so I am very happy that all the effort of the past half year really turned out to be worthwhile for all of us.

Even though all relevant documents from the school, including the slides of the lectures and most contributed talks, are finally published on the school's website, I nevertheless want to include a pointer to the slides of my lazy matrices talk on this blog for reference.

The topic and structure of the talk are very similar to the talks of the previous months. I motivate the use of contraction-based methods from the realisation that storing intermediate computational results in memory can be less optimal than recomputing them in a clever way when needed. Then I present lazy matrices as a solution to the issue that the code needed for performing calculations in the sense of contraction-based methods can become very complicated. Afterwards I hint at how we use lazy matrices in the context of the quantum chemistry program molsturm, and how molsturm itself really facilitates the implementation of new quantum-chemical methods. For this I show in slide 20 a comparison of parts of a working Coupled-Cluster doubles (CCD) code based on molsturm with the relevant part of the equation for the CCD residual.

Link: Lazy matrices for contraction-based algorithms (IWR school talk)
Licence: Creative Commons License

by Michael F. Herbst at 10. November 2017 23:00:00

21. October 2017

sECuREs Webseite

Making GitLab authenticate against dex

Because I found it frustratingly hard to make GitLab and dex talk to each other, this article walks you through what I did step-by-step.

Let’s establish some terminology:

  • dex is our OpenID Connect (OIDC) “Provider (OP)”
    in other words: the component which verifies usernames and passwords.

  • GitLab is our OpenID Connect (OIDC) “Relying Party (RP)”
    in other words: the component where the user actually wants to log in.

Step 1: configure dex

First, I followed dex’s Getting started guide until I had dex serving the example config.

Then, I made the following changes to examples/config-dev.yaml:

  1. Change the issuer URL to be fully qualified and use HTTPS.
  2. Configure the HTTPS listener.
  3. Configure GitLab’s redirect URI.

Here is a diff:

--- /proc/self/fd/11	2017-10-21 15:01:49.005587935 +0200
+++ /tmp/config-dev.yaml	2017-10-21 15:01:47.121632025 +0200
@@ -1,7 +1,7 @@
 # The base path of dex and the external name of the OpenID Connect service.
 # This is the canonical URL that all clients MUST use to refer to dex. If a
 # path is provided, dex's HTTP service will listen at a non-root URL.
-issuer: http://127.0.0.1:5556/dex
+issuer: https://dex.example.net:5554/dex
 
 # The storage configuration determines where dex stores its state. Supported
 # options include SQL flavors and Kubernetes third party resources.
@@ -14,11 +14,9 @@
 
 # Configuration for the HTTP endpoints.
 web:
-  http: 0.0.0.0:5556
-  # Uncomment for HTTPS options.
-  # https: 127.0.0.1:5554
-  # tlsCert: /etc/dex/tls.crt
-  # tlsKey: /etc/dex/tls.key
+  https: dex.example.net:5554
+  tlsCert: /etc/letsencrypt/live/dex.example.net/fullchain.pem
+  tlsKey: /etc/letsencrypt/live/dex.example.net/privkey.pem
 
 # Uncomment this block to enable the gRPC API. This values MUST be different
 # from the HTTP endpoints.
@@ -50,7 +48,7 @@
 staticClients:
 - id: example-app
   redirectURIs:
-  - 'http://127.0.0.1:5555/callback'
+  - 'https://gitlab.example.net/users/auth/mydex/callback'
   name: 'Example App'
   secret: ZXhhbXBsZS1hcHAtc2VjcmV0
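
With the config adjusted, dex can then be run with the example config, as in the Getting started guide:

% ./bin/dex serve examples/config-dev.yaml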

Step 2: configure GitLab

First, I followed GitLab Docker images to get GitLab running in Docker.

Then, I swapped out the image with computersciencehouse/gitlab-ce-oidc, which is based on the official image, but adds OpenID Connect support.

I added the following config to /srv/gitlab/config/gitlab.rb:

gitlab_rails['omniauth_enabled'] = true

# Must match the args.name (!) of our configured omniauth provider:
gitlab_rails['omniauth_allow_single_sign_on'] = ['mydex']

# By default, third-party authentication results in a newly created
# user which needs to be unblocked by an admin. Disable this
# additional safety mechanism and directly create users:
gitlab_rails['omniauth_block_auto_created_users'] = false

gitlab_rails['omniauth_providers'] = [
  {
    name: 'openid_connect',  # identifies the omniauth gem to use
    label: 'OIDC',
    args: {
      # The name shows up in the GitLab UI in title-case, i.e. “Mydex”,
      # and must match the name in client_options.redirect_uri below
      # and omniauth_allow_single_sign_on above.
      #
      # NOTE that if you change the name after users have already
      # signed up through the provider, you will need to update the
      # “identities” PostgreSQL table accordingly:
      # echo "UPDATE identities SET provider = 'newdex' WHERE \
      #   provider = 'mydex';" | gitlab-psql gitlabhq_production
      'name':          'mydex',

      # Scope must contain “email”.
      'scope':         ['openid', 'profile', 'email'],

      # Discover all endpoints from the issuer, specifically from
      # https://dex.example.net:5554/dex/.well-known/openid-configuration
      'discovery':     true,

      # Must match the issuer configured in dex:
      # Note that http:// URLs did not work in my tests; use https://
      'issuer':        'https://dex.example.net:5554/dex',

      'client_options': {
        # identifier, secret and redirect_uri must match a
        # configured client in dex.
        'identifier':   'example-app',
        'secret':       'ZXhhbXBsZS1hcHAtc2VjcmV0',
        'redirect_uri': 'https://gitlab.example.net/users/auth/mydex/callback'
      }
    }
  }
]
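
For these changes to take effect, the omnibus-based image needs a reconfigure; with the Docker setup above (assuming the container is named gitlab, as in GitLab's Docker guide):

docker exec -it gitlab gitlab-ctl reconfigure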

Step 3: patch omniauth-openid-connect

Until dex issue #376 is fixed, the following patch for the omniauth-openid-connect gem is required:

--- /opt/gitlab/embedded/lib/ruby/gems/2.3.0/gems/omniauth-openid-connect-0.2.3/lib/omniauth/strategies/openid_connect.rb.orig	2017-10-21 12:31:50.777602847 +0000
+++ /opt/gitlab/embedded/lib/ruby/gems/2.3.0/gems/omniauth-openid-connect-0.2.3/lib/omniauth/strategies/openid_connect.rb	2017-10-21 12:34:20.063308560 +0000
@@ -42,24 +42,13 @@
       option :send_nonce, true
       option :client_auth_method
 
-      uid { user_info.sub }
-
+      uid { @email }
       info do
-        {
-          name: user_info.name,
-          email: user_info.email,
-          nickname: user_info.preferred_username,
-          first_name: user_info.given_name,
-          last_name: user_info.family_name,
-          gender: user_info.gender,
-          image: user_info.picture,
-          phone: user_info.phone_number,
-          urls: { website: user_info.website }
-        }
+        { email: @email }
       end
 
       extra do
-        {raw_info: user_info.raw_attributes}
+        {raw_info: {}}
       end
 
       credentials do
@@ -165,6 +154,7 @@
               client_id: client_options.identifier,
               nonce: stored_nonce
           )
+          @email = _id_token.raw_attributes['email']
           _access_token
         }.call()
       end
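
After patching the gem, GitLab's services need to be restarted to pick up the change, e.g. by restarting the whole container (again assuming it is named gitlab):

docker restart gitlab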

by Michael Stapelberg at 21. October 2017 13:19:00

20. October 2017

Mero’s Blog

A day in the life of an Omnivore

You get an email. It's an invite to a team dinner. As you have only recently joined this team, it's going to be your first. How exciting! You look forward to getting to know your new coworkers better and socializing outside the office. They are nice and friendly people and you are sure it's going to be a great evening. However, you also have a distinct feeling of anxiety and dread in you. Because you know, the first dinner with new people also means you are going to have the conversation again. You know it will come up, whether you want it to or not. Because you have had it a thousand times - because you are an Omnivore.

You quickly google the place your manager suggested. "Green's Organic Foodery". They don't have a menu on their site, but the name alone makes clear that meat probably isn't their specialty, exactly. You consider asking for a change of restaurant, but quickly decide that you don't want to get a reputation as a killjoy who forces their habits on everyone just yet. You figure they are going to have something with meat on their menu. And if not, you can always just grab a burger on your way home or throw a steak into a pan when you get back. You copy the event to your calendar and continue working.

At six, everyone gathers to go to dinner together. It's not far, so you decide to just walk. On the way you get to talk to some of your team mates. You talk about Skiing, your home countries, your previous companies. You are having fun - they seem to be easy-going people, you are getting along well. It's going to be an enjoyable night.

You arrive at the restaurant and get seated. The waiter arrives with the menu and picks up your orders for drinks. When they leave, everyone starts flipping through the menu. “You've got to try their Tofu stir-fry. It's amazing”, Kevin tells you. You nod and smile and turn your attention to the booklet in your hand. You quickly take in the symbols decorating some of the items. “G” - safe to assume, these are the gluten-free options. There's also an “O” on a bunch of them. Also familiar, but ambiguous - could be either “Omnivores" or “Ovo-lacto” (containing at least dairy products or eggs), you've seen both usages. There is no legend to help disambiguate and quickly flipping through the rest of the menu, you find no other symbols. Ovo-lacto, then. You are going to have to guess from the names and short descriptions alone, whether they also contain any meat. They have lasagna, marked with an “O”. Of course that's probably just the cheese but they might make it with actual minced beef.

The waiter returns and takes your orders. “The lasagna - what kind is it?”, you ask. You are trying to avoid the O-word as long as you can. “It's Lasagna alla Bolognese, house style”. Uh-oh. House style? “Is it made from cattle, or based on gluten proteins?” (you don't care how awkward you have to phrase your question, you are not saying the magic trigger words!) “Uhm… I'm not sure. I can ask in the kitchen, if you'd like?” “That would be great, thanks”. They leave. Jen, from across the table, smiles at you - are you imagining it, or does it look slightly awkward? You know the next question. “Are you an Omnivore?” “I eat meat, yes”, you say, smiling politely. Frick. Just one meal is all you wanted. But it had to come up at some point anyway, so fine. “Wow. I'm Ovo-lacto myself. But I couldn't always eat meat, I think. Is it hard?” You notice that your respective seat neighbors have started to listen too. Ovo-lactos aren't a rarity anymore (Omnivores a bit more so, but not that much), but the topic still seems interesting enough to catch attention. You've seen it before. What feels like a hundred thousand times. In fact, you have said exactly the same thing Jen just said, just a year or so ago. Before you just decided to go Omnivore.

“Not really”, you start. “I found it not much harder than when I went Ovo-lacto. You have to get used to it, of course, and pay a little more attention, but usually you find something. Just like when you start eating cheese and eggs.” At that moment, the waiter returns. “I spoke to the chef and we can make the lasagna with beef, if you like”. “Yes please”, you hand the menu back to them with a smile. “I considered going Ovo-lacto”, Mike continues the conversation from the seat next to Jen, “but for now I just try to have some milk every once in a while. Like, in my coffee or cereal. It's not that I don't like it, there are really great dairy products. For example, this place in the city center makes this amazing yogurt. But having it every day just seems very hard”. “Sure”, you simply say. You know they mean well and you don't want to offend them by not being interested; but you also heard these exact literal words at least two dozen times. And always with ample evidence in the room that it's not actually that hard.

“I don't really see the point”, Roy interjects from the end of the table. You make a mental check-mark. “Shouldn't people just eat what they want? I don't get why suddenly we all have to like meat”. It doesn't matter that no one suggested that. “I mean, I do think it's cool if people eat meat”, someone whose name you don't remember adds, “I sometimes think I should eat more eggs myself. But it's just so annoying that you get these Omnivores who try to lecture you about how unhealthy it is to not eat meat or that humans are naturally predisposed to digest meat. I mean, you seem really relaxed about it”, they quickly add as assurance in your direction, “but you can't deny that there are also these Omni-nazis”. You sip on your water, mentally counting backwards from 3. “You know how you find an Omnivore at a party?”, Kevin asks jokingly from your right. “Don't worry, they will tell you”, Rick delivers the punchline for him. How original.

“I totally get Omnivores. If they want to eat meat, that's great. What I don't understand is this weird trend of fake-salad. Like, people get a salad, but then they put french dressing on it, or bacon bits. I mean, if you want a salad, why not just simply have a salad?”. You know the stupidly obvious answer of course and you haven't talked in a while, so you decide to steer the conversation into a more pleasant direction. “It's simple, really. You like salad, right?” “Yeah, of course”. “So, if you like salad, but decide that you also want to eat dairy or meat - doesn't it make sense to get as close to a pure salad as you can? While still staying with your conviction to eat meat? It's a tradeoff, sure, but isn't it better than no salad at all?” There's a brief pause. You can tell that they haven't considered that before. No one has. Which you find baffling. Every single time. “Hm. I guess I haven't thought about it like that before. From that point of view it does kind of make sense. Anyway, I still prefer the real deal”. “That's fine”, you say, smiling, “I will continue to eat my salad with french dressing”.

Your food arrives and the conversation continues for a bit, with the usual follow-up questions - do you eat pork too, or just beef? What about dogs? Would you consider eating monkey meat? Or human? You explain that you don't worry about the exact line, that you are not dogmatic about it and usually just decide based on convenience and what seems obvious (and luckily, these questions don't usually need an answer in practice anyway). Someone brings up how some of what's labeled as milk actually is made from almonds, because it's cheaper, so you can't even be sure you actually get dairy. But slowly, person by person, the topic shifts back to work, hobbies and family. “How's the lasagna?”, Jen asks. “Great”, you reply with a smile, because it is.

On your way home, you take stock. Overall, the evening went pretty well. You got along great with most of your coworkers and had long, fun conversations. The food ended up delicious, even if you wish they had just properly labeled their menu. You probably are going to have to nudge your team on future occasions, so you go out to Omnivore-friendlier places. But you are also pretty sure they are open to it. Who knows, you might even get them to go to a steak house at some point. You know you are inevitably going to have the conversation again at some point - whether it comes up at another meal with your team or with a new person you eat with for the first time. This time, at least, it went reasonably well.


This post is a work of fiction. ;) Names, characters, places and incidents either are products of the author's imagination or are used fictitiously. Any resemblance to actual events or locales or persons, living or dead, is entirely coincidental.

Also, if we had "the conversation" before, you should know I still love you and don't judge you :) It's just that I had it a thousand times :)

20. October 2017 23:45:00

18. September 2017

michael-herbst.com

Coulomb-Sturmians and molsturm

Yesterday I gave a talk at our annual group retreat at the Darmstädter Haus in Hirschegg, Kleinwalsertal. For this talk I expanded the slides from my lazy matrix talk in Kiel, incorporating a few of the more recent Coulomb-Sturmian results I obtained.

Most importantly I added a few slides to highlight the issues with standard contracted Gaussians and to discuss the fundamental properties of the Coulomb-Sturmians. Furthermore I gave some details on why a contraction-based scheme is advantageous if Coulomb-Sturmian basis functions are employed, and discussed the design of our molsturm program package and how it allows us to perform calculations in a fully contraction-based manner.

Link: Coulomb-Sturmians and molsturm (Slides Kleinwalsertal talk)
Licence: Creative Commons License

by Michael F. Herbst at 18. September 2017 22:00:00

12. September 2017

Mero’s Blog

Diminishing returns of static typing

I often get into discussions with people where the matter of strictness and expressiveness of a static type system comes up. The most common one, by far, is Go's lack of generics and the resulting necessity to use interface{} in container types (the container-subpackages are obvious cases, but also context). When I express my view that the lack of static type-safety for containers isn't a problem, I am met with condescending reactions ranging from disbelief to patronizing.

I also often take the other side of the argument. This happens commonly when talking to proponents of dynamically typed languages. In particular, I have gotten into debates about whether Python would be suitable for a certain use case. When the lack of static type-safety is brought up, the proponents of Python defend it by pointing out that it now features optional type hints, which they say make it possible to reap the benefits of static typing even in a conventionally dynamically typed language.

This is an attempt to write down my thoughts on both of these more thoroughly (though they are not in any way novel or creative). Discussions usually don't provide the space for that. They are also often charged, and parties are more interested in “winning the argument” than in finding consensus.


I don't think it's particularly controversial that static typing in general has advantages, even though actual data about those seems to be surprisingly hard to come by. I certainly believe it; that is why I use Go in the first place. There is a difference of opinion, though, in how large and important those benefits are, and in how much of the behavior of a program must be statically checked to reap them.

To understand this, we should first make explicit what the benefits of static type checking are. The most commonly mentioned one is to catch bugs as early in the development process as possible. If a piece of code I write already contains a rigorous proof of correctness in the form of types, just writing it down and compiling it gives me assurance that it will work as intended in all circumstances. At the other end of the spectrum, in a fully dynamic language I will need to write tests exercising all of my code to find bugs. Running tests takes time. Writing good tests that actually cover all intended behavior is hard. And as it's in general impossible to cover all possible execution paths, there will always be the possibility of a rare edge-case that we didn't think of testing to trigger a bug in production.

So, we can think of static typing as increasing the proportion of bug-free lines of code deployed to production. This is of course a simplification. In practice, we would still catch a lot of the bugs via more rigorous testing, QA, canarying and other practices. To a degree, we can still subsume these in this simplification though: if we catch a buggy line of code in QA or the canary phase, we are going to roll it back, so in a sense, the proportion of the code we wrote that makes it into production bug-free will still go down. Thus:

This is usually the understanding that the “more static typing is always better” argument is based on. Checking more behavior at compile time means fewer bugs in production, which means more satisfied customers and less being woken up at night by your pager. Everybody's happy.

Why is it, then, that we don't all code in Idris, Agda or a similarly strict language? Sure, the graph above is suggestively drawn to taper off, but it's still monotonically increasing. You'd think that this implies more is better. The answer, of course, is that static typing has a cost and that there is no free lunch.

The costs of static typing again come in many forms. It requires more upfront investment in thinking about the correct types. It increases compile times and thus slows down the change-compile-test-repeat cycle. It makes for a steeper learning curve. And more often than we like to admit, the error messages a compiler gives us decline in usefulness as the power of a type system increases. Again, we can oversimplify and subsume these effects by saying that it reduces our speed:

This is what we mean when we talk about dynamically typed languages being good for rapid prototyping. In the end, however, what we are usually interested in is what I'd like to call velocity: the speed with which we can deploy new features to our users. We can model that as the speed with which we can roll out bug-free code. Graphically, that is expressed as the product of the previous two graphs:

In practice, the product of these two functions will have a maximum, a sweet spot of maximum velocity. Designing a type system for a programming language is, at least in part, about finding that sweet spot¹.

Now if we are to accept all of this, that opens up a different question: If we are indeed searching for that sweet spot, how do we explain the vast differences in strength of type systems that we use in practice? The answer of course is simple (and I'm sure many of you have already typed it up in an angry response). The curves I drew above are completely made up. Given how hard it is to do empirical research in this space and to actually quantify the measures I used here, it stands to reason that their shape is very much up for interpretation.

A Python developer might very reasonably believe that optional type-annotations are more than enough to achieve most if not all the advantages of static typing. While a Haskell developer might be much better adapted to static typing and not be slowed down by it as much (or even at all). As a result, the perceived sweet spot can vary widely:

What's more, the importance of these factors might vary a lot too. If you are writing avionics code or are programming the control unit for a space craft, you probably want to be pretty darn sure that the code you are deploying is correct. On the other hand, if you are a Silicon Valley startup in your growth-phase, user acquisition will be of a very high priority and you get users by deploying features quicker than your competitors. We can model that, by weighing the factors differently:

Your use case will determine the sweet spot you are looking for and thus the language you will choose. But a language is also designed with a set of use cases in mind and will set its own sweet spot according to that.

I think when we talk about how strict a type system should be, we need to acknowledge these subjective factors. And it is fine to believe that your perception of one of those curves, or of how they should be weighted, is closer to a hypothetical objective reality than another person's. But you should make that belief explicit and provide a justification for why your perception is more realistic. Don't just assume that other people view them the same way and then be confused when they do not come to the same conclusions as you.


Back to Go's type system. In my opinion, Go manages to hit a good sweet spot (that is, its design agrees with my personal preferences on this). To me it seems that Go reaps probably upwards of 90% of the benefits you can get from static typing, while still not being too impeding. And while I definitely agree that static typing is beneficial, the marginal benefit of making user-defined containers type-safe simply seems pretty low (even if it's positive). In the end, it would probably be less than 1% of Go code that would get this additional type-checking, and it is probably pretty obvious code. Meanwhile, I perceive generics to be a pretty costly language feature. So I find it hard to justify a large perceived cost with a small perceived benefit.

Now, that is not to say I'm not open to being convinced. Just that simply saying “but more type-safety!” is only looking at one side of the equation and isn't enough. You need to acknowledge that there is no free lunch and that this is a tradeoff. You need to accept that your perceptions of how big the benefit of adding static typing is, how much it costs and how important it is are all subjective. If you want to convince me that my perception of their benefit is wrong, the best way would be to provide specific instances of bugs or production crashes caused by a type-assertion on an interface{} taken out of a container. Or a refactoring you couldn't make because of the lack of type-safety with a specific container. Ideally, this takes the form of an experience report, which I consider an excellent way to talk about engineered tradeoffs.

Of course you can continue to roll your eyes whenever someone questions your perception of the value-curve of static typing. Or pretend that when I say the marginal benefit of type-safe containers is small, I am implying that the total benefit of static typing is small. It's an effective debate-tactic, if your goal is to shut up your opposition. But not if your goal is to convince them and build consensus.


[1] There is a generous and broad exception for research languages here. If the point of your design is to explore the possibility space of type-systems, matters of practicality can of course often be ignored.

12. September 2017 11:05:00

11. September 2017

michael-herbst.com

Advanced bash scripting 2017

I am very happy to announce that my graduate school asked me to repeat the block course about bash scripting, which I first taught in 2015.

In the course we will take a structured look at UNIX shell scripting from the bottom up. We will revise some elements of the UNIX operating system infrastructure and discuss handy features of the shell as well as its syntax elements. Whilst the main focus of the course is the bash shell, we will also look at other common utility programs like grep, sed and awk, as they are very handy in the context of scripting. Last but not least we will discuss common pitfalls and how they can be avoided.

The course runs from 6th till 10th November 2017 at Heidelberg University. You can find further information on the "Advanced bash scripting" course website.

As usual all course material will be published both on the course website as well as the course github repository afterwards.

Update (29/09/2017): Registration is now open.

by Michael F. Herbst at 11. September 2017 08:00:00

05. September 2017

Mero’s Blog

Gendered Marbles

tl;dr: "Some marbles, apparently, have a gender. And they seem to be overwhelmingly male."

A couple of days ago The MarbleLympics 2017 popped into my twitter stream. In case you are unaware (I certainly was): it's a series of videos where a bunch of marbles participate in a made-up Olympics. They are split into teams that then compete in a variety of different events. The whole event is professionally filmed, cut and overlaid with both fake spectator noise and a well-made, engaging sports commentary. It is really fun to watch. I don't know why, but I find it way more captivating than watching actual sports. I thoroughly recommend it.

Around event 8 (high jump) though, I suddenly noticed that the commentator would occasionally not only personify but actually gender marbles. For most of the commentary he just refers to the teams as a whole with a generic "they", but every once in a while - and especially often during the high-jump event - he would use a singular gendered pronoun. It only really occurred to me when he referred to one of the marbles as "she".

This instantly became one of those things that after noticing it, I couldn't unnotice it. It's not so much that it matters. But from then on, I couldn't stop listening up every time a singular pronoun was used.

Well, you know where this is going. Fully aware of how much of a waste of my time this is, I sat down and counted. More specifically, I downloaded the closed captions of all the videos and grepped through them for pronouns. I did double-check all findings and here is what I found: By my count, 20 distinct marbles are referred to by singular pronouns (yes. I noted their names to filter duplicates. Also I kind of hoped to find a genderfluid marble to be honest). Here is an alphabetized list of gendered marbles:

  • Aqua (Oceanics) - Male
  • Argent (Quicksilvers) - Male
  • Clementin (O'Rangers) - Male
  • Cocoa (Chocolatiers) - Male (in two events)
  • Cosmo (Team Galactic) - Male
  • Imar (Primaries) - Male
  • Jump (Jungle Jumpers) - Male
  • Leap (Jungle Jumpers) - Male
  • Mandarin (O'Rangers) - Male (in two events)
  • Mary (Team Primary) - Female
  • Mercurial (Quicksilvers) - Male
  • Mimo (Team Momo) - Male
  • Momo Momo (Team Momo) - Male (in three events)
  • Pinky Winky (Pinkies) - Male
  • Rapidly (Savage Speeders) - Male
  • Taffy (Jawbreakers) - Male
  • Wespy (Midnight Wisps) - Male
  • Whizzy (Savage Speeders) - Male
  • Yellah (Mellow Yellow) - Male
  • Yellup (Mellow Yellow) - Male

As you can see, the overwhelming majority of gendered marbles are men. There is exactly one exception: Mary. From what I can tell, that's because it's the only name that has clear gender associations. All the other names probably could go either way. And marbles obviously have no gender. They are as non-gendered an object as you could imagine. And yet there seems to be a default assumption that athletic marbles would be men.

Obviously this doesn't matter. Obviously you can't discriminate against marbles. You can't misgender them or hurt their feelings. Obviously the commentator didn't sit down and make a list of all the marbles and assign 95% of them a male gender - it's clearly just an ad-hoc subconscious assignment. And to be absolutely clear: I am not trying to fault the makers of these videos at all. They did nothing wrong. It's a ludicrous expectation for them to sit down and make sure that they assign balanced genders to their marbles.

But I do find it an interesting observation. I do think it reflects an implicit, unconscious bias in a striking way. I also think it illustrates nicely that gender bias in language isn't exclusive to languages like German, where all nouns are gendered (take note, German friends). Of course none of this is news. This kind of unconscious gender bias in language is well-researched and documented. It's just that once you know about it, you can't stop noticing the evidence for it popping up everywhere. Even with marbles.

And all of that being said: Yes, I am also aware that all of this is slightly ridiculous.


PS: In case the team behind the MarbleLympics are reading this: Really, thank you for the videos :) They are great.

05. September 2017 23:22:00

19. August 2017

sECuREs Webseite

Why Go is my favorite programming language

I strive to respect everybody’s personal preferences, so I usually steer clear of debates about which is the best programming language, text editor or operating system.

However, recently I was asked a couple of times why I like and use a lot of Go, so here is a coherent article to fill in the blanks of my ad-hoc in-person ramblings :-).

My background

I have used C and Perl for a number of decently sized projects. I have written programs in Python, Ruby, C++, CHICKEN Scheme, Emacs Lisp, Rust and Java (for Android only). I understand a bit of Lua, PHP, Erlang and Haskell. In a previous life, I developed a number of programs using Delphi.

I had a brief look at Go in 2009, when it was first released. I seriously started using the language when Go 1.0 was released in 2012, featuring the Go 1 compatibility guarantee. I still have code running in production which I authored in 2012, largely untouched.

1. Clarity

Formatting

Go code, by convention, is formatted using the gofmt tool. Programmatically formatting code is not a new idea, but contrary to its predecessors, gofmt supports precisely one canonical style.

Having all code formatted the same way makes reading code easier; the code feels familiar. This helps not only when reading the standard library or Go compiler, but also when working with many code bases — think Open Source, or big companies.

Further, auto-formatting is a huge time-saver during code reviews, as it eliminates an entire dimension in which code could be reviewed before: now, you can just let your continuous integration system verify that gofmt produces no diffs.

Interestingly enough, having my editor apply gofmt when saving a file has changed the way I write code. I used to attempt to match what the formatter would enforce, then have it correct my mistakes. Nowadays, I express my thought as quickly as possible and trust gofmt to make it pretty (example of what I would type, click Format).

High-quality code

I use the standard library (docs, source) quite a bit, see below.

All standard library code which I have read so far was of extremely high quality.

One example is the image/jpeg package: I didn’t know how JPEG worked at the time, but it was easy to pick up by switching between the Wikipedia JPEG article and the image/jpeg code. If the package had a few more comments, I would qualify it as a teaching implementation.

Opinions

I have come to agree with many opinions the Go community holds.

Few keywords and abstraction layers

The Go specification lists only 25 keywords, which I can easily keep in my head.

The same is true for builtin functions and types.

In my experience, the small number of abstraction layers and concepts makes the language easy to pick up and quickly feel comfortable in.

While we’re talking about it: I was surprised by how readable the Go specification is. It really seems to target programmers (rather than standards committees?).

2. Speed

Quick feedback / low latency

I love quick feedback: I appreciate websites which load quickly, I prefer fluent User Interfaces which don’t lag, and I will choose a quick tool over a more powerful tool any day. The findings of large web properties confirm that this behavior is shared by many.

The authors of the Go compiler respect my desire for low latency: compilation speed matters to them, and new optimizations are carefully weighed against whether they will slow down compilation.

A friend of mine had not used Go before. After installing the RobustIRC bridge using go get, they concluded that Go must be an interpreted language and I had to correct them: no, the Go compiler just is that fast.

Most Go tools are no exception, e.g. gofmt or goimports are blazingly fast.

Maximum resource usage

For batch applications (as opposed to interactive applications), utilizing the available resources to their fullest is usually more important than low latency.

It is delightfully easy to profile and change a Go program to utilize all available IOPS, network bandwidth or compute. As an example, I wrote about filling a 1 Gbps link, and optimized debiman to utilize all available resources, reducing its runtime by hours.

3. Rich standard library

The Go standard library provides means to effectively use common communications protocols and data storage formats/mechanisms, such as TCP/IP, HTTP, JPEG, SQL, …

Go’s standard library is the best one I have ever seen. I perceive it as well-organized, clean, small, yet comprehensive: I often find it possible to write reasonably sized programs with just the standard library, plus one or two external packages.

Domain-specific data types and algorithms are (in general) not included and live outside the standard library, e.g. golang.org/x/net/html. The golang.org/x namespace also serves as a staging area for new code before it enters the standard library: the Go 1 compatibility guarantee precludes any breaking changes, even if they are clearly worthwhile. A prominent example is golang.org/x/crypto/ssh, which had to break existing code to establish a more secure default.

4. Tooling

To download, compile, install and update Go packages, I use the go get tool.

All Go code bases I have worked with use the built-in testing facilities. This results not only in easy and fast testing, but also in coverage reports being readily available.

Whenever a program uses more resources than expected, I fire up pprof. See this golang.org blog post about pprof for an introduction, or my blog post about optimizing Debian Code Search. After importing the net/http/pprof package, you can profile your server while it’s running, without recompilation or restarting.

Cross-compilation is as easy as setting the GOARCH environment variable, e.g. GOARCH=arm64 for targeting the Raspberry Pi 3. Notably, tools just work cross-platform, too! For example, I can profile gokrazy from my amd64 computer: go tool pprof ~/go/bin/linux_arm64/dhcp http://gokrazy:3112/debug/pprof/heap.

godoc displays documentation as plain text or serves it via HTTP. godoc.org is a public instance, but I run a local one to use while offline or for not yet published packages.

Note that these are standard tools coming with the language. Coming from C, each of the above would be a significant feat to accomplish. In Go, we take them for granted.

Getting started

Hopefully I was able to convey why I’m happy working with Go.

If you’re interested in getting started with Go, check out the beginner’s resources we point people to when they join the Gophers slack channel. See https://golang.org/help/.

Caveats

Of course, no programming tool is entirely free of problems. Given that this article explains why Go is my favorite programming language, it focuses on the positives. I will mention a few issues in passing, though:

  • If you use Go packages which don’t offer a stable API, you might want to use a specific, known-working version. Your best bet is the dep tool, which is not part of the language at the time of writing.
  • Idiomatic Go code does not necessarily translate to the highest performance machine code, and the runtime comes at a (small) cost. In the rare cases where I found performance lacking, I successfully resorted to cgo or assembler. If your domain is hard-realtime applications or otherwise extremely performance-critical code, your mileage may vary, though.
  • I wrote that the Go standard library is the best I have ever seen, but that doesn’t mean it doesn’t have any problems. One example is complicated handling of comments when modifying Go code programmatically via one of the standard library’s oldest packages, go/ast.

by Michael Stapelberg at 19. August 2017 11:00:00

18. August 2017

RaumZeitLabor

Analog Games Night at the RaumZeitLabor

On September 30, 2017, the first analog games night will take place at the RaumZeitLabor. From board games and card games to puzzle games and everything else, we'd like to have it all, so bring your own games! Together, in good company and over Mate, frozen pizza and other traditionally nerdy food, we will play games the way it was still the custom in the 20th century. The event starts at 5 pm. Admission is free, of course, but we'd be happy about donations. It's over when the last person leaves. Never been to the RaumZeitLabor? Have a look at the [directions](https://raumzeitlabor.de/kontakt/anfahrt/) and, most importantly, write down the room's phone number (+49 621 76 23 13 70), so we can help you if you can't find the way.

by uwap at 18. August 2017 00:00:00

14. August 2017

Mero’s Blog

Why context.Value matters and how to improve it

tl;dr: I think context.Value solves the important use case of writing stateless - and thus scalable - abstractions. I believe dynamic scoping could provide the same benefits while solving most of the criticism of the current implementation. I thus try to steer the discussion away from the concrete implementation and towards the underlying problem.

This blog post is relatively long. I encourage you to skip sections you find boring.


Lately this blogpost has been discussed in several Go forums. It brings up several good arguments against the context-package:

  • It requires every intermediate function to include a context.Context, even if it does not use the context itself. This introduces clutter into APIs and requires extensive plumbing. Additionally, ctx context.Context "stutters".
  • context.Value is not statically type-safe, requiring type-assertions.
  • It does not allow you to express critical dependencies on context-contents statically.
  • It's susceptible to name collisions due to requiring a global namespace.
  • It's a map implemented as a linked list and thus inefficient.

However, I don't think the post does a good enough job of discussing the problems context was designed to solve. It explicitly focuses on cancellation. Context.Value is dismissed by simply stating that

[…] designing your APIs without ctx.Value in mind at all makes it always possible to come up with alternatives.

I think this is not doing the question justice. To have a reasoned argument about context.Value, both sides need to be considered. No matter what your opinion on the current API is: the fact that seasoned, intelligent engineers felt the need - after significant thought - for Context.Value should already imply that the question deserves more attention.

I'm going to try to describe my view of the kinds of problems the context package tries to address, what alternatives currently exist and why I find them insufficient, and I'm going to sketch an alternative design for a future evolution of the language. It would solve the same problems while avoiding some of the learned downsides of the context package. It is not meant as a specific proposal for Go 2 (I consider that far premature at this point) but just to show that a balanced view can reveal alternatives in the design space and make it easier to consider all options.


The problem context sets out to solve is that of abstracting a program into independently executing units handled by different parts of a system, and of scoping data to one of these units in this scenario. It's hard to clearly define the abstraction I am talking about, so I'm instead going to give some examples.

  • When you build a scalable web service you will probably have a stateless frontend server that does things like authentication, verification and parsing for you. This allows you to scale up the external interface effortlessly and thus also gracefully fall back if the load increases past what the backends can handle. By treating requests as independent from each other you can load-balance them freely between your frontends.
  • Microservices split a large application into small individual pieces that each process individual requests, each potentially branching out into more requests to other services. The requests will usually be independent, making it easy to scale individual microservices up and down based on demand, to load-balance between instances and to solve problems in transparent proxies.
  • Functions as a Service goes one step further: You write single stateless functions that transform data and the platform will make them scale and execute efficiently.
  • Even CSP, the concurrency model built into Go, can be viewed through that lens. The programmer expresses her problem as individually executing "processes" and the runtime will execute them efficiently.
  • Functional Programming as a paradigm calls this "purity". The concept that a function's result may only depend on its input parameters means not much more than the absence of shared state and independent execution.
  • The design of a Request Oriented Collector for Go plays exactly into the same assumptions and ideas.

The idea in all these cases is to increase scaling (whether distributed among machines, between threads or just in code) by reducing shared state while maintaining shared usage of resources.

Go takes a measured approach to this. It doesn't go as far as some functional programming languages to forbid or discourage mutable state. It allows sharing memory between threads and synchronizing with mutexes instead of relying purely on channels. But it also definitely tries to be a (if not the) language to write modern, scalable services in. As such, it needs to be a good language to write this kind of stateless service in. It needs to be able to make the request, rather than the process, the level of isolation. At least to a degree.

(Side note: This seems to play into the statement of the above article's author, who claims that context is mainly useful for server authors. I disagree, though. The general abstraction happens on many levels. E.g. for this abstraction, a click in a GUI counts just as much as a "request" as an HTTP request does.)

This brings with it the requirement of being able to store some data on a request-level. A simple example for this would be authentication in an RPC framework. Different requests will have different capabilities. If a request originates from an administrator it should have higher privileges than if it originates from an unauthenticated user. This is fundamentally request scoped data. Not process, service or application scoped. And the RPC framework should treat this data as opaque. It is application specific not only how that data looks en détail but also what kinds of data it requires.

Just like an HTTP proxy or framework should not need to know about request parameters or headers it doesn't consume, an RPC framework shouldn't know about request scoped data the application needs.


Let's try to look at specific ways this problem is (or could be) solved without involving context. As an example, let's look at the problem of writing an HTTP middleware. We want to be able to wrap an http.Handler (or a variation thereof) in a way that allows the wrapper to attach data to a request.

To get static type-safety we could try to add some type to our handlers. We could have a type containing all the data we want to keep request scoped and pass that through our handlers:

type Data struct {
    Username string
    Log *log.Logger
    // …
}

func HandleA(d Data, res http.ResponseWriter, req *http.Request) {
    // …
    d.Username = "admin"
    HandleB(d, res, req)
    // …
}

func HandleB(d Data, res http.ResponseWriter, req *http.Request) {
    // …
}

However, this would prevent us from writing reusable Middleware. Any such middleware would need to make it possible to wrap HandleA. But as it's supposed to be reusable, it can't know the type of the Data parameter. You could make the Data parameter an interface{} and require type-assertion. But that wouldn't allow the middleware to inject its own data. You might think that interface type-assertions could solve this, but they have their own set of problems. In the end, this approach won't bring you actual additional type safety.

We could store our state keyed by requests. For example, an authentication middleware could do

type Authenticator struct {
    mu sync.Mutex
    users map[*http.Request]string
    wrapped http.Handler
}

// NewAuthenticator initializes the map; a zero Authenticator would panic on
// the first write to users.
func NewAuthenticator(h http.Handler) *Authenticator {
    return &Authenticator{
        users: make(map[*http.Request]string),
        wrapped: h,
    }
}

func (a *Authenticator) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    // …
    a.mu.Lock()
    a.users[req] = "admin"
    a.mu.Unlock()
    defer func() {
        a.mu.Lock()
        delete(a.users, req)
        a.mu.Unlock()
    }()
    a.wrapped.ServeHTTP(res, req)
}

func (a *Authenticator) Username(req *http.Request) string {
    a.mu.Lock()
    defer a.mu.Unlock()
    return a.users[req]
}

This has some advantages over context:

  • It is more type-safe.
  • While we still can't express a requirement on an authenticated user statically, we can express a requirement on an Authenticator.
  • It's not susceptible to name-collisions anymore.

However, we bought this with shared mutable state and the associated lock contention. It can also break in subtle ways, if one of the intermediate handlers decides to create a new Request - as http.StripPrefix is going to do soon.

Lastly, we might consider storing this data in the *http.Request itself, for example by adding it as a stringified URL parameter. This too has several downsides, though. In fact, it checks almost every single item on our list of downsides of context.Context; the exception is being a linked list. But even that advantage is bought with a lack of thread safety: if the request is passed to a handler in a different goroutine, we get into trouble.

(Side note: All of this also gives us a good idea of why the context package is implemented as a linked list. It allows all the data stored in it to be read-only and thus inherently thread-safe. There will never be any lock-contention around the shared state saved in a context.Context, because there will never be any need for locks)
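To make that concrete, here is a sketch (simplified, not the actual standard library source) of how a value-carrying context can be built as an immutable linked list:

type valueCtx struct {
    context.Context              // the parent context, i.e. the tail of the list
    key, val interface{}
}

func withValue(parent context.Context, key, val interface{}) context.Context {
    // A new head node is allocated; the parent is never modified, so lookups
    // from other goroutines need no synchronization.
    return &valueCtx{parent, key, val}
}

func (c *valueCtx) Value(key interface{}) interface{} {
    if c.key == key {
        return c.val
    }
    return c.Context.Value(key) // walk towards the root
}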

So we see that it is really hard (if not impossible) to solve this problem of having data attached to requests in independently executing handlers while also doing significantly better than with context.Value. Whether you believe this a problem worth solving or not is debatable. But if you want to get this kind of scalable abstraction you will have to rely on something like context.Value.


No matter whether you are now convinced of the usefulness of context.Value or still doubtful: The disadvantages can clearly not be ignored in either case. But we can try to find a way to improve on it. To eliminate some of the disadvantages while still keeping its useful attributes.

One way to do that (in Go 2) would be to introduce dynamically scoped variables. Semantically, each dynamically scoped variable represents a separate stack. Every time you change its value, the new one is pushed to the stack. It is popped off again after your function returns. For example:

// Let's make up syntax! Only a tiny bit, though.
dyn x = 23

func Foo() {
    fmt.Println("Foo:", x)
}

func Bar() {
    fmt.Println("Bar:", x)
    x = 42
    fmt.Println("Bar:", x)
    Baz()
    fmt.Println("Bar:", x)
}

func Baz() {
    fmt.Println("Baz:", x)
    x = 1337
    fmt.Println("Baz:", x)
}

func main() {
    fmt.Println("main:", x)
    Foo()
    Bar()
    Baz()
    fmt.Println("main:", x)
}

// Output:
main: 23
Foo: 23
Bar: 23
Bar: 42
Baz: 42
Baz: 1337
Bar: 42
Baz: 23
Baz: 1337
main: 23

A few notes about what I would imagine the semantics to be:

  • I would only allow dyn-declarations at package scope. Given that there is no way to refer to a local identifier of a different function, that seems logical.
  • A newly spawned goroutine would inherit the dynamic values of its parent function. If we implement them (like context.Context) via linked lists, the shared data will be read-only. The head-pointer would need to be stored in some kind of goroutine-local storage. Thus, writes only ever modify this local storage (and the global heap), so wouldn't need to be synchronized specifically.
  • The dynamic scoping would be independent of the package the variable is declared in. That is, if foo.A modifies a dynamic bar.X, then that modification is visible to all subsequent callees of foo.A, whether they are in bar or not.
  • Dynamically scoped variables would likely not be addressable. Otherwise we'd lose concurrency safety and the clear "down-stack" semantics of dynamic scoping. It would still be possible to declare dyn x *int though, and thus get mutable state to pass on.
  • The compiler would allocate the necessary storage for the stacks, initialized to their initializers and emit the necessary instructions to push and pop values on writes and returns. To account for panics and early returns, a mechanism like defer would be needed.
  • There is some confusing overlap with package-scoped variables in this design. Most notably, from seeing foo.X = Y you wouldn't be able to tell whether foo.X is dynamically scoped or not. Personally, I would address that by removing package-scoped variables from the language. They could still be emulated by declaring a dynamically-scoped pointer and never modifying it (see the sketch after this list). Its pointee is then a shared variable. But most usages of package-scoped variables would probably just use dynamically scoped variables.
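To illustrate that last point in the made-up syntax from above, a shared variable could be emulated like this:

// Made-up syntax: a dynamically scoped pointer that is never reassigned.
dyn counter *int = new(int)

func Incr() {
    // All callers share the same pointee, so this behaves like a
    // package-scoped variable would today.
    *counter++
}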

It is instructive to compare this design against the list of disadvantages identified for context.

  • API clutter would be removed, as request-scoped data would now be part of the language without needing explicit passing.
  • Dynamically scoped variables are statically type-safe. Every dyn declaration has an unambiguous type.
  • It would still not be possible to express critical dependencies on dynamically scoped variables but they also couldn't be absent. At worst they'll have their zero value.
  • Name collision is eliminated. Identifiers are, just like variable names, properly scoped.
  • While a naive implementation would still use linked lists, they wouldn't be inefficient. Every dyn declaration gets its own list and only the head-pointer ever needs to be operated on.
  • The design is still "magic" to a degree. But that "magic" is problem-inherent (at least if I understand the criticism correctly). The magic is exactly the possibility to pass values transparently through API boundaries.

Lastly, I'd like to mention cancellation. While the author of the above post dedicates most of it to cancellation, I have so far mostly ignored it. That's because I believe cancellation to be trivially implementable on top of a good context.Value implementation. For example:

// $GOROOT/src/done
package done

// C is closed when the current execution context (e.g. request) should be
// cancelled.
dyn C <-chan struct{}

// CancelFunc returns a channel that gets closed, when C is closed or cancel is
// called.
func CancelFunc() (c <-chan struct{}, cancel func()) {
    // Note: We can't modify C here, because it is dynamically scoped, which is
    // why we return a new channel that the caller should store.
    ch := make(chan struct{})

    var o sync.Once
    cancel = func() { o.Do(func() { close(ch) }) }
    if C != nil {
        go func() {
            <-C
            cancel()
        }()
    }
    return ch, cancel
}

// $GOPATH/example.com/foo
package foo

func Foo() {
    var cancel func()
    done.C, cancel = done.CancelFunc()
    defer cancel()
    // Do things
}

This cancellation mechanism would now be usable from any library that wants it without needing any explicit support in its API. This would also make it easy to add cancellation capabilities retroactively.
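For illustration, a library function could then honor cancellation like this (Result and the results channel are made-up names for the example):

func Wait(results <-chan Result) (Result, error) {
    select {
    case r := <-results:
        return r, nil
    case <-done.C:
        // The dynamically scoped channel was closed somewhere up-stack.
        return Result{}, errors.New("cancelled")
    }
}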


Whether you like this design or not, it demonstrates that we shouldn't rush to call for the removal of context. Removing it is only one possible solution to its downsides.

If the removal of context.Context actually comes up, the question we should ask is "do we want a canonical way to manage request-scoped values and at what cost". Only then should we ask what the best implementation of this would be or whether to remove the current one.

14. August 2017 00:17:25

06. August 2017

Mero’s Blog

What I want from a logging API

This is intended as an Experience Report about logging in Go. There are many like it but this one is mine.

I have been trying for a while now to find (or build) a logging API in Go that fills my needs. There are several things that make this hard to get "right" though. This is my attempt to describe them coherently in one place.

When I say "logging", I mean informational text messages for human consumption used to debug a specific problem. There is an idea currently gaining traction in the Go community called "structured logging". logrus is a popular package that implements this idea. If you haven't heard of it, you might want to skim its README. And while I definitely agree that log-messages should contain some structural information that is useful for later filtering (like the current time or a request ID), I believe the idea as often advocated is somewhat misguided and conflates different use cases that are better addressed otherwise. For example, if you are tempted to add a structured field to your log containing an HTTP response code to alert on too many errors, you probably want to use metrics and timeseries instead. If you want to follow a field through a variety of systems, you probably want to annotate a trace. If you want analytics, like calculating daily active users or which user-agents are used how often, you probably want what I like to call request annotations, as these are properties of a request, not of a log-line. If you exclude all these use cases, there isn't a lot left for structured logging to address.

The logs I am talking about are meant to give a user or the operator of a software system more insight into what is going on under the covers. The default assumption is that they are not looked at until something goes wrong: be it a failing test, an alerting system notifying of an issue, a bug report being investigated or a CLI not doing what the user expected. As such, it is important that they are verbose to a certain degree. As an operator, I don't want to find out that I can't troubleshoot a problem because someone did not log a critical piece of information. An API that requires (or encourages) me to only log structured data will ultimately only discourage me from logging at all. In the end, some form of log.Debugf("Error reading foo: %v", err) is the perfect API for my use case. Any structured information needed to make this call practically useful should be part of the setup phase of whatever log is.

The next somewhat contentious question is whether or not the API should support log levels (and if so, which). My personal short answer is "yes, and the log levels should be Error, Info and Debug". I could try and justify these specific choices but I don't think that really helps; chalk it up to personal preference if you like. I believe having some variation on the verbosity of logs is very important. A CLI should be quiet by default but be able to tell the user more specifically where things went wrong on request. A service should be debuggable in depth, but unconditionally logging verbosely would have an unacceptable latency impact in production and too heavy storage costs. There need to be some logs by default though, to get quick insights during an emergency or in retrospect. So, those three levels seem fine to me.

Lastly, what I need from a logging API is the possibility to set up verbosity and log sinks both horizontally and vertically. What I mean by that is that software is usually built in layers. They could be individual microservices, Go packages or types. Requests will then traverse these layers vertically, possibly branching out and interleaved to various degrees.

Request forest

Depending on what and how I am debugging, it makes sense to increase the log verbosity of a particular layer (say, I narrowed down the problem to shared state in a particular handler and want to see what happens to that state during multiple requests) or for a particular request (say, I narrowed down a problem to "requests which have header FOO set to BAR" and want to follow one of them to get a detailed view of what it does). Same with logging sinks: for example, a request initiated by a test should get logged to its *testing.T with maximum verbosity, so that if and only if the test fails, I get a detailed context about it and can immediately start debugging. These settings should be possible during runtime without a restart. If I am debugging a production issue, I don't want to change a command line flag and restart the service.

Let's try to implement such an API.

We can first narrow down the design space a bit, because we want to use testing.T as a logging sink. A T has several methods that would suit our needs well, most notably Logf. This suggests an interface for logging sinks that looks somewhat like this:

type Logger interface {
    Logf(format string, v ...interface{})
}

type simpleLogger struct {
    w io.Writer
}

func (l simpleLogger) Logf(format string, v ...interface{}) {
    fmt.Fprintf(l.w, format, v...)
}

func NewLogger(w io.Writer) Logger {
    return simpleLogger{w}
}

This has the additional advantage that we can easily implement a Discard-sink with minimal overhead (not even the allocations of formatting the message):

type Discard struct{}

func (Discard) Logf(format string, v ...interface{}) {}

The next step is to get leveled logging. The easiest way to achieve this is probably

type Logs struct {
    Debug Logger
    Info Logger
    Error Logger
}

func DiscardAll() Logs {
    return Logs{
        Debug: Discard{},
        Info: Discard{},
        Error: Discard{},
    }
}

By putting a struct like this (or its constituent fields) as members of a handler, type or package, we can get the horizontal configurability we are interested in.

To get vertical configurability we can use context.Value - as much as it's frowned upon by some, it is the canonical way to get request-scoped behavior/data in Go. So, let's add this to our API:

type ctxLogs struct{}

func WithLogs(ctx context.Context, l Logs) context.Context {
    return context.WithValue(ctx, ctxLogs{}, l)
}

func GetLogs(ctx context.Context, def Logs) Logs {
    // If no Logs are in the context, we default to its zero-value,
    // by using the ,ok version of a type-assertion and throwing away
    // the ok.
    l, _ := ctx.Value(ctxLogs{}).(Logs)
    if l.Debug == nil {
        l.Debug = def.Debug
    }
    if l.Info == nil {
        l.Info = def.Info
    }
    if l.Error == nil {
        l.Error = def.Error
    }
    return l
}

So far, this is a sane, simple and easy to use logging API. For example:

type App struct {
    L log.Logs
}

func (a *App) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    l := log.GetLogs(req.Context(), a.L)
    l.Debug.Logf("%s %s", req.Method, req.URL.Path)
    // ...
}

The issue with this API, however, is that it is completely inflexible, if we want to preserve useful information like the file and line number of the caller. Say, I want to implement the equivalent of io.MultiWriter. For example, I want to write logs both to os.Stderr and to a file and to a network log service.

I might try to implement that via

func MultiLogger(ls ...Logger) Logger {
    return &multiLog{ls}
}

type multiLog struct {
    loggers []Logger
}

func (m *multiLog) Logf(format string, v ...interface{}) {
    for _, l := range m.loggers {
        l.Logf(format, v...)
    }
}

However, now the caller of Logf of the individual loggers will be the line in (*multiLog).Logf, not the line of its caller. Thus, caller information will be useless. There are two APIs currently existing in the stdlib to work around this:

  1. (testing.T).Helper (from Go 1.9) lets you mark a frame as a test-helper. When the caller-information is then added to the log-output, all frames marked as helpers are skipped. So, theoretically, we could add a Helper method to our Logger interface and require that to be called in each wrapper. However, Helper itself uses the same caller-information, so all wrappers must call the Helper method of the underlying *testing.T directly, without any wrapping methods. Even embedding doesn't help, as the Go compiler creates an implicit wrapper for that.
  2. (log.Logger).Output lets you specify a number of call-frames to skip. We could add a similar method to our log sink interface, and wrapping loggers would then need to increment the passed-in number when calling a wrapped sink (see the sketch below). It's possible to do this, but it wouldn't help with test-logs.
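To illustrate the second approach: with the standard log package, a helper can attribute log lines to its own caller by passing the right calldepth to (log.Logger).Output. A minimal sketch:

// logf logs through l while attributing the log line to logf's caller:
// calldepth 2 skips Output itself and logf.
func logf(l *log.Logger, format string, v ...interface{}) {
    l.Output(2, fmt.Sprintf(format, v...))
}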

This is a very similar problem to the ones I wrote about last week. For now, I am using the technique I described as Extraction Methods. That is, the modified API is now this:

// Logger is a logging sink.
type Logger interface {
    // Logf logs a text message with the given format and values to the sink.
    Logf(format string, v ...interface{})

    // Helpers returns a list of Helpers to call into from all helper methods,
    // when wrapping this Logger. This is used to skip frames of logging
    // helpers when determining caller information.
    Helpers() []Helper
}

type Helper interface {
    // Helper marks the current frame as a helper method. It is then skipped
    // when determining caller information during logging.
    Helper()
}

// Callers can be used as a Helper for log sinks who want to log caller
// information. An empty Callers is valid and ready for use.
type Callers struct {
    // ...
}

// Helper marks the calling method as a helper. When using Callers in a
// Logger, you should also call this to mark your methods as helpers.
func (*Callers) Helper() {
    // ...
}

type Caller struct {
    Name string
    File string
    Line int
}

// Caller can be used to determine the caller of Logf in a Logger, skipping all
// frames marked via Helper.
func (*Callers) Caller() Caller {
    // ...
}

// TestingT is a subset of the methods of *testing.T, so that this package
// doesn't need to import testing.
type TestingT interface {
    Logf(format string, v ...interface{})
    Helper()
}

// Testing returns a Logger that logs to t. Log lines are discarded, if the
// test succeeds.
func Testing(t TestingT) Logger {
    return testLogger{t}
}

type testLogger struct {
    t TestingT
}

func (l testLogger) Logf(format string, v ...interface{}) {
    l.t.Helper()
    l.t.Logf(format, v...)
}

func (l testLogger) Helpers() []Helper {
    return []Helper{l.t}
}

// New returns a logger writing to w, prepending caller-information.
func New(w io.Writer) Logger {
    return &simple{w, new(Callers)}
}

type simple struct {
    w io.Writer
    c *Callers
}

func (l *simple) Logf(format string, v ...interface{}) {
    l.c.Helper()
    c := l.c.Caller()
    fmt.Fprintf(l.w, "%s:%d: " + format, append([]interface{}{c.File, c.Line}, v...)...)
}

func (l *simple) Helpers() []Helper {
    return []Helper{l.c}
}

// Discard discards all logs.
func Discard() Logger {
    return discard{}
}

type discard struct{}

func (discard) Logf(format string, v ...interface{}) {
}

func (discard) Helpers() []Helper {
    return nil
}

// MultiLogger duplicates all Logf-calls to a list of loggers.
func MultiLogger(ls ...Logger) Logger {
    var m multiLogger
    for _, l := range ls {
        m.helpers = append(m.helpers, l.Helpers()...)
    }
    m.loggers = ls
    return m
}

type multiLogger struct {
    loggers []Logger
    helpers []Helper
}

func (m multiLogger) Logf(format string, v ...interface{}) {
    for _, h := range m.helpers {
        h.Helper()
    }
    for _, l := range m.loggers {
        l.Logf(format, v...)
    }
}

func (m multiLogger) Helpers() []Helper {
    return m.helpers
}

It's kind of a clunky API and I have no idea about the performance implications of all the Helper-code. But it does work, so it is what I ended up with for now. Notably, it puts the implementation complexity into the implementers of Logger, in favor of making the actual consumers of them as simple as possible.

06. August 2017 20:08:56

01. August 2017

RaumZeitLabor

Preserving, Collecting, Safeguarding - The RZL at the ISG

Last Thursday we had the opportunity to take a closer look at the Stadtarchiv Mannheim, the city's municipal archive.

Starting from the large aerial-photo carpet of Mannheim in the foyer of the Collini-Center, we went straight into the sacred register halls on the first floor. There we could marvel at council minute books from the last 356 years (some weighing more than 10 kg!) and move several tons of rolling shelves by hand. Files of the municipal offices, construction records and historical maps and posters are stored there in acid-free boxes and stacks - in total more than 10 kilometers of archive material, if you were to line it all up.

We continued into the archive's digitization center: valuable and irreplaceable documents are preserved for the future there with the help of, among other things, large-format and book scanners. Despite modern tools, many documents are still digitized by hand - unfortunately, OCR does not work for many kinds of handwriting.

Anyone who wants to search the online archive or inspect archive material is directed to one of the reading rooms. After a visit to the Fritz-Cahn-Garnier reading room, our tour ended in front of the information wall about the upcoming move into the Ochsenpferchbunker.

Our thanks go to Dr. Stockert and Dr. Schenk for the great and informative tour - we are looking forward to the new "Marchivum" in the Neckarstadt!

Stadtarchiv impressions

by flederrattie at 01. August 2017 00:00:00

30. July 2017

Mero’s Blog

The trouble with optional interfaces

tl;dr: I take a look at the pattern of optional interfaces in Go: what they are used for, why they are bad and what we can do about it.

Note: I wrote most of this article on Wednesday, with the intention to finish and publish it on the weekend. While I was sleeping, Jack Lindamood published this post, which talks about many of the same problems. This was the exact moment I saw that post :) I decided to publish this anyway; it contains, in my opinion, enough additional content to be worth it. But I do encourage you to (also?) read his post.

What are optional interfaces?

Optional interfaces are interfaces which can optionally be extended by implementing some other interface. A good example is http.Flusher (and similar), which is optionally implemented by an http.ResponseWriter. If a request comes in via HTTP/2, the ResponseWriter will implement this interface to support HTTP/2 Server Push. But as not all requests will be over HTTP/2, this isn't part of the normal ResponseWriter interface and instead provided via an optional interface that needs to be type-asserted at runtime.

In general, whenever some piece of code is doing a type-assertion with an interface type (that is, use an expression v.(T), where T is an interface type), it is very likely offering an optional interface.

A far from exhaustive list of where the optional interface pattern is used (to roughly illustrate the scope of the pattern):

  • http.Flusher, http.Hijacker and http.Pusher, optionally implemented by an http.ResponseWriter
  • io.ReaderFrom and io.WriterTo, which io.Copy uses when available
  • the context-aware method variants in database/sql/driver, e.g. driver.QueryerContext

What are people using them for?

There are multiple reasons to use optional interfaces. Let's find examples for them. Note that this list neither claims to be exhaustive (there are probably use cases I don't know about) nor disjoint (in some cases, optional interfaces will carry more than one of these use cases). But I think it's a good rough partition for discussion.

Passing behavior through API boundaries

This is the case of ResponseWriter and its optional interfaces. The API, in this case, is the http.Handler interface that users of the package implement and that the package accepts. As features like HTTP/2 Push or connection hijacking are not available to all connections, this interface needs to use the lowest common denominator between all possible behaviors. So, if more features need to be supported, we must somehow be able to pass this optional behavior through the http.Handler interface.

Enabling optional optimizations/features

io.Copy serves as a good example of this. The required interfaces for it to work are just io.Reader and io.Writer. But it can be made more efficient if the passed values also implement io.WriterTo or io.ReaderFrom, respectively. For example, a bytes.Reader implements WriteTo. This means you need less copying if the source of an io.Copy is a bytes.Reader. Compare these two (somewhat naive) implementations:

func Copy(w io.Writer, r io.Reader) (n int64, err error) {
    buf := make([]byte, 4096)
    for {
        rn, rerr := r.Read(buf)
        wn, werr := w.Write(buf[:rn])
        n += int64(wn)
        if rerr == io.EOF {
            return n, nil
        }
        if rerr != nil {
            return n, rerr
        }
        if werr != nil {
            return n, werr
        }
    }
}

func CopyTo(w io.Writer, r io.WriterTo) (n int64, err error) {
    return r.WriteTo(w)
}

type Reader []byte

func (r *Reader) Read(b []byte) (n int, err error) {
    n = copy(b, *r)
    *r = (*r)[n:]
    if n == 0 {
        err = io.EOF
    }
    return n, err
}

func (r *Reader) WriteTo(w io.Writer) (int64, error) {
    n, err := w.Write(*r)
    *r = (*r)[n:]
    return int64(n), err
}

Copy needs to first allocate a buffer, then copy all the data from the *Reader to that buffer, then pass it to the Writer. CopyTo, on the other hand, can directly pass the byte-slice to the Writer, saving an allocation and a copy.

Some of that cost can be amortized, but in general, its existence is a forced consequence of the API. By using optional interfaces, io.Copy can use the more efficient method, if supported, and fall back to the slow method, if not.

Backwards compatible API changes

When database/sql was upgraded to use context, it needed help from the drivers to actually implement cancellation and the like. So it needed to add contexts to the methods of driver.Conn. But it couldn't just make that change; it would be a backwards incompatible API change, violating the Go1 compatibility guarantee. It also couldn't add a new method to the interface, as there are third-party driver implementations which would break because they don't implement the new method.

So it instead resorted to deprecating the old methods and encouraging driver implementers to add optional methods that include the context.
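On the consuming side, the pattern looks roughly like this (a simplified sketch, not the actual database/sql code; the conversion between driver.NamedValue and driver.Value arguments is elided):

// queryConn prefers the context-aware optional interface and falls back to
// the deprecated context-less one, losing cancellation support on that path.
func queryConn(ctx context.Context, conn driver.Conn, query string, nargs []driver.NamedValue, args []driver.Value) (driver.Rows, error) {
    if qc, ok := conn.(driver.QueryerContext); ok {
        return qc.QueryContext(ctx, query, nargs)
    }
    if q, ok := conn.(driver.Queryer); ok {
        return q.Query(query, args)
    }
    return nil, errors.New("driver does not support queries")
}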

Why are they bad?

There are several problems with using optional interfaces. Some of them have workarounds (see below), but all of them have drawbacks on their own.

They violate static type safety

In a lot of cases, the consumer of an optional interface can't really treat it as optional. For example, http.Hijacker is usually used to support WebSockets. A handler for WebSockets will, in general, not be able to do anything useful when called with a ResponseWriter that does not implement Hijacker. Even when it correctly does a comma-ok type assertion to check for it, it can't do anything but serve an error in that case.

The http.Hijacker type conveys the necessity of hijacking a connection, but since it is provided as an optional interface, there is no possibility to require this type statically. In that way, optional interfaces hide static type information.
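The usual dance in a handler that needs hijacking looks something like this; note that the unmet requirement only surfaces at runtime:

func serveWS(w http.ResponseWriter, r *http.Request) {
    hj, ok := w.(http.Hijacker)
    if !ok {
        // Nothing useful left to do; the requirement was invisible statically.
        http.Error(w, "connection cannot be hijacked", http.StatusInternalServerError)
        return
    }
    conn, bufrw, err := hj.Hijack()
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer conn.Close()
    _ = bufrw // speak the WebSocket protocol over conn from here on
}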

They remove a lot of the power of interfaces

Go's interfaces are really powerful by being very small; in general, the advice is to only add one method, maybe a small handful. This advice enables easy and powerful composition. io.Reader and io.Writer have a myriad of implementations inside and outside of the standard library. This makes it really easy to, say, read uncompressed data from a compressed network connection, while streaming it to a file and hashing it at the same time to write to some content-addressed blob storage.

Now, this composition will, in general, destroy any optional interfaces of those values. Say we have an HTTP middleware to log requests. It wants to wrap an http.Handler and log the request's method, path, response code and duration (or, equivalently, collect them as metrics to export). This is, in principle, easy to do:

type logResponder struct {
    http.ResponseWriter
    code int
    set bool
}

func (rw *logResponder) WriteHeader(code int) {
    rw.code = code
    rw.set = true
    rw.ResponseWriter.WriteHeader(code)
}

func LogRequests(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        lr := &logResponder{ResponseWriter: w}
        m, p, start := r.Method, r.URL.Path, time.Now()
        defer func() {
            log.Printf("%s %s -> %d (%v)", m, p, lr.code, time.Since(start))
        }()
        h.ServeHTTP(lr, r)
    })
}

But *logResponder will now only support the methods declared by http.ResponseWriter, even if the wrapped ResponseWriter also supports some of the optional interfaces. That is because method sets of a type are determined at compile time.

Thus, by using this middleware, the wrapped handler is suddenly unable to use websockets, or HTTP/2 server push or any of the other use cases of optional interfaces. Even worse: this deficiency will only be discovered at runtime.

Optimistically adding the optional interface's methods and type-asserting the underlying ResponseWriter at runtime doesn't work either: handlers would incorrectly conclude that the optional interface is always present. And if the underlying ResponseWriter does not give access to the underlying connection, there just is no useful way to implement http.Hijacker.

There is one way around this, which is to dynamically check the wrapped interface and create a type with the correct method set, e.g.:

func Wrap(wrap, with http.ResponseWriter) http.ResponseWriter {
    var (
        flusher http.Flusher
        pusher http.Pusher
        // ...
    )
    flusher, _ = wrap.(http.Flusher)
    pusher, _ = wrap.(http.Pusher)
    // ...
    if flusher == nil && pusher == nil {
        return with
    }
    if flusher == nil && pusher != nil {
        return struct{
            http.ResponseWriter
            http.Pusher
        }{with, pusher}
    }
    if flusher != nil && pusher == nil {
        return struct{
            http.ResponseWriter
            http.Flusher
        }{with, flusher}
    }
    return struct{
        http.ResponseWriter
        http.Flusher
        http.Pusher
    }{with, flusher, pusher}
}

This has two major drawbacks:

  • Both code-size and running time of this will increase exponentially with the number of optional interfaces you have to support (even if you generate the code).
  • You need to know every single optional interface that might be used. While supporting everything in net/http is certainly tenable, there might be other optional interfaces, defined by some framework unbeknownst to you. If you don't know about it, you can't wrap it.

What can we use instead?

My general advice is to avoid optional interfaces as much as possible. There are alternatives, though they are not entirely satisfying either.

Context.Value

context was added after most of the optional interfaces were already defined, but its Value method was meant exactly for this kind of thing: to pass optional behavior past API boundaries. This will still not solve the static type safety issue of optional interfaces, but it does mean you can easily wrap them.

For example, net/http could instead do

type ctxKey string

var ctxFlusher = ctxKey("flusher")

func GetFlusher(ctx context.Context) (f Flusher, ok bool) {
    f, ok = ctx.Value(ctxFlusher).(Flusher)
    return f, ok
}

This would enable you to do

func ServeHTTP(w http.ResponseWriter, r *http.Request) {
    f, ok := http.GetFlusher(r.Context())
    if ok {
        f.Flush()
    }
}

If now a middleware wants to wrap ResponseWriter, that's not a problem, as it will not touch the Context. If a middleware wants to add some other optional behavior, it can do so easily:

type Frobnicator interface{
    Frobnicate()
}

var ctxFrobnicator = ctxKey("frobnicator")

func GetFrobnicator(ctx context.Context) (f Frobnicator, ok bool) {
    f, ok = ctx.Value(ctxFrobnicator).(Frobnicator)
    return f, ok
}

As contexts form a linked list of key-value-pairs, this will interact nicely with whatever optional behavior is already defined.

There are good reasons to frown upon the usage of Context.Value; but they apply just as much to optional interfaces.

Extraction methods

If you know an interface type that is likely to be wrapped and that has optional interfaces associated with it, you can provide for dynamic extension in the package that defines the type. So, e.g.:

package http

type ResponseWriter interface {
    // Methods…
}

type ResponseWriterWrapper interface {
    ResponseWriter

    WrappedResponseWriter() ResponseWriter
}

// GetFlusher returns an http.Flusher, if res wraps one.
// Otherwise, it returns nil.
func GetFlusher(res ResponseWriter) Flusher {
    if f, ok := res.(Flusher); ok {
        return f
    }
    if w, ok := res.(ResponseWriterWrapper); ok {
        return GetFlusher(w.WrappedResponseWriter())
    }
    return nil
}

package main

type logger struct {
    res http.ResponseWriter
    req *http.Request
    log *log.Logger
    start time.Time
}

func (l *logger) WriteHeader(code int) {
    d := time.Since(l.start)
    l.log.Printf("%s %s -> %d (%v)", l.req.Method, l.req.URL.Path, code, d)
    l.res.WriteHeader(code)
}

func (l *logger) WrappedResponseWriter() http.ResponseWriter {
    return l.res
}

func LogRequests(h http.Handler, l *log.Logger) http.Handler {
    return http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
        res = &logger{
            res: res,
            req: req,
            log: l,
            start: time.Now(),
        }
        h.ServeHTTP(res, req)
    })
}

func ServeHTTP(res http.ResponseWriter, req *http.Request) {
    if f := http.GetFlusher(res); f != nil {
        f.Flush()
    }
}

This still doesn't address the static typing issue and explicit dependencies, but at least it enables you to wrap the interface conveniently.

Note that this is conceptually similar to the errors package, which calls the wrapper-method "Cause". That package also shows an issue with this pattern: it only works if all wrappers use it. That's why I think it's important for the wrapping interface to live in the same package as the wrapped interface; it provides an authoritative way to do the wrapping, preventing fragmentation.

Provide statically typed APIs

net/http could provide alternative APIs for optional interfaces that explicitly include them. For example:

type Hijacker interface {
    ResponseWriter
    Hijack() (net.Conn, *bufio.ReadWriter, error)
}

type HijackHandler interface{
    ServeHijacker(w Hijacker, r *http.Request)
}

func HandleHijacker(pattern string, h HijackHandler) {
    // ...
}

For some use cases, this provides a good way to side-step the issue of unsafe types. Especially if you can come up with a limited set of scenarios that would rely on the optional behavior, putting them into their own type would be viable.

The net/http package could, for example, provide separate ResponseWriter types for different connection types (for example HTTP2Response). It could then provide a func(HTTP2Handler) http.Handler, that serves an error if it is asked to serve an unsuitable connection and otherwise delegates to the passed Handler. Now, the programmer needs to explicitly wire a handler that requires HTTP/2 up accordingly. They can rely on the additional features, while also making clear what paths must be used over HTTP/2.
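A sketch of what such an API could look like (HTTP2Response and HandleHTTP2 are made-up names, following the idea above):

// HTTP2Response statically guarantees the capabilities that are otherwise
// only available as optional interfaces.
type HTTP2Response interface {
    http.ResponseWriter
    http.Pusher
    http.Flusher
}

func HandleHTTP2(h func(HTTP2Response, *http.Request)) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w2, ok := w.(HTTP2Response)
        if !ok {
            http.Error(w, "this endpoint must be served over HTTP/2", http.StatusUpgradeRequired)
            return
        }
        h(w2, r)
    })
}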

Gradual repair

I think the use of optional interfaces as in database/sql/driver is perfectly fine - if you plan to eventually remove the original interface. Otherwise, users will have to continue to implement both interfaces to be usable with your API, which is especially painful when wrapping interfaces. For example, I recently wanted to wrap importer.Default to add behavior and logging. I also needed ImporterFrom, which required separate implementations, depending on whether the importer returned by Default implements it or not. Most modern code, however, shouldn't need that.

So, for third party packages (the stdlib can't do that, because of compatibility guarantees), you should consider using the methodology described in Russ Cox' excellent Codebase Refactoring article and actually deprecate and eventually remove the old interface. Use optional interfaces as a transition mechanism, not a fix.

How could Go improve the situation?

Make it possible for reflect to create methods

There are currently at least two GitHub issues which would make it possible to extend interfaces dynamically: reflect: NamedOf, reflect: MakeInterface. I believe this would be the easiest solution - it is backwards compatible and doesn't require any language changes.

Provide a language mechanism for extension

The language could provide a native mechanism to express extension, either by adding a keyword for it or, for Go 2, by making extension the default behavior when embedding an interface in a struct. I'm not sure either is a good idea, though. I would probably prefer the latter, because of my distaste for keywords. Note that it would still be possible to compose an interface into a struct, just not via embedding but by adding a field and delegation-methods. Personally, I'm not a huge fan of embedding interfaces in structs anyway, except when I'm explicitly trying to extend them with additional behavior: their zero value is not usable, so it requires additional hoops to jump through.

Conclusion

I recommend:

  • If at all possible, avoid optional interfaces in APIs you provide. They are just too inconvenient and un-Go-ish.
  • Be careful when wrapping interfaces, in particular when there are known optional interfaces for them.

Using optional interfaces correctly is inconvenient and cumbersome. That should signal that you are fighting the language. The workarounds needed all try to circumvent one or more design decision of Go: to value composition over inheritance, to prefer static typing and to make computation and behavior obvious from code. To me, that signifies that optional interfaces are fundamentally not a good fit for the language.

30. July 2017 18:39:00

24. July 2017

michael-herbst.com

[c¼h] Parallelised numerics in Python: An introduction to Bohrium

Thursday a week ago I gave a brief introductory talk in our Heidelberg Chaostreff about the Bohrium project. Especially after the HPC day at the Niels Bohr Institute during my recent visit to Copenhagen, I became rather enthusiastic about Bohrium and wanted to pass on some of my experiences.

The main idea of Bohrium is to build a fully numpy-compatible framework for high-performance computing, which can automatically parallelise numpy array operations and/or execute them on a general-purpose graphics card. The hope is that this eliminates the step of rewriting a prototypical Python implementation of a scientific model in lower-level languages like C++ or CUDA before turning to the actual real-world problem one has in mind.

In practice Bohrium achieves this by translating the python code (via some intermediate steps) into small pieces of C or CUDA code. These are then automatically compiled at runtime of the script, taking into account the current hardware setup, and afterwards executed. The results of such a just-in-time compiled kernel are again available in numpy-like arrays and can be passed to other scripts for post-processing, e.g. plotting in matplotlib.

It is important to note that the effect of Bohrium is limited to array operations. Plain Python for loops, for example, are not touched. This is, however, hardly a problem if the practice of so-called array programming is followed. In array programming one avoids plain for-loops and similar traditional Python language elements in favour of special syntax which works on blocks (numpy arrays) of data at once. Examples of such operations are pretty much the typical numpy workflow:

  • views and slices: array[1:3]
  • broadcasting: array[:, np.newaxis]
  • elementwise operations: array1 * array2
  • reduction: np.sum(array1, axis=1)

A slightly bigger drawback of Bohrium is that the just-in-time compilation takes time during which no results are produced. In other words, Bohrium only starts to pay off at larger problem sizes or if exactly the same sequence of instructions is executed many times.

In my c¼h I demonstrate Bohrium by the means of this example script

#!/usr/bin/env python3

import numpy as np
import sys
import time

def moment(n, a):
  avg = np.sum(a)/a.size
  return float(np.sum( (a - avg)**n ) / a.size)


def compute(a):
  start = time.time()

  mean = float(np.sum(a) / a.size)
  var  = moment(2, a)
  m3   = moment(3, a)
  m4   = moment(4, a)

  end = time.time()

  fmt = "After {0:8.3f}s: {1:8.3f} {2:8.3f} {3:8.3f} {4:8.3f}"
  print(fmt.format(end - start, mean, var, m3, m4))


def main(size):
  for i in range(6):
    compute(np.random.rand(size))


if __name__ == "__main__":
  size = 30
  if len(sys.argv) >= 2:
    size = int(sys.argv[1])
  main(size)

which is also available for download. The script performs a very simple analysis of the (randomly generated) input data: it computes some statistical moments and displays them to the user. For bigger arrays the single-threaded numpy starts to get very slow, whereas the multi-threaded Bohrium version wins even though it needs to compile first. Running the script with Bohrium does not require one to change even a single line of code! Just

python3 -m bohrium ./2017.07.13_moments.py

does kick off the automatic parallelisation.

The talk has been recorded and is available on YouTube. Note that the title of the talk and the description are German, but the talk itself is in English.

by Michael F. Herbst at 24. July 2017 22:00:00

22. July 2017

Mero’s Blog

Using Hilbert Curves to 100% Zelda

tl;dr: I used Hilbert Curves to make it quicker to walk through a list of locations on a map, so I could fully complete a video game.

As you probably know, the question of what the best Zelda game is was recently finally settled by Breath Of The Wild. Like most people I know, I ended up playing it. And to keep me engaged, I decided early on that I would get as close as possible to 100% of the game before finishing it. That is, I wanted to finish all shrines, find all Korok Seeds, max out all armor and do all quests before killing Ganon. I recently finished that and finally killed Ganon. Predictably, I was in for a disappointment:

98.59%

98.59 percent! I did expect that, though. The reason is that only certain things count towards the displayed percentage; Korok Seeds are one of them, Shrines are another. But it also counts landmarks and locations as shown on the map. Each contributes 1/12% to the total.

So I started on the onerous task of finding the last 17 locations. I'm not above using help for that, so I carefully scrolled through an online map of the BotW universe, meticulously comparing the locations on it with the ones already on my in-game map. Anything I hadn't visited was marked and visited. But that only put me at 99.58%; I was still missing 5 locations. Apparently, I didn't compare carefully enough.

I needed a more systematic approach. I started instead to go through an alphabetical list of locations, looking up each on the map to see if I already had it mapped. But that got old really quickly. Alphabetical order just wasn't a great way to organize these: I wanted a list that I could check systematically, ordered not alphabetically but geographically, so I wouldn't have to jump around the map to find the next one. Which is when I realized that this would be the perfect application for a Hilbert curve.

If you don't know (though you should really just read the Wikipedia article), the Hilbert curve is a space-filling fractal curve, that is, a continuous surjective map from the unit interval onto the unit square. It is iteratively defined as the limit of finite curves that get denser and denser. One of the most interesting properties of the curve and its finite approximations is that points that are close on the number line get mapped to points that are close in the plane. So if we could extract all locations from the online map, figure out for each location which number gets mapped to that point, and order the locations by those numbers, we'd get a list of locations where neighbors in the list are close to each other on the map. Presumably, that would make for easy checking of the list: the next location should be pretty much neighbouring the previous one, and if I can't find a location nearby, chances are that I didn't visit it yet (and I can then look it up specifically).

[edit] Commentors on reddit and Hacker News have correctly pointed out that any continuous curve satisfies the property that near points on the line map to near points on the plane. What makes the Hilbert Curve special is that we work on finite approximations, and with the Hilbert Curve we don't have to worry about finding the "correct" level of discrete approximation.

To see what that means, we can look at a zig-zag curve. Say we split our map into a 100000x100000 grid and move in a zig-zag, left-to-right, top-to-bottom. Given how sparse our point-set is, this would mean that most of the rows are empty and some of them would have only one point on them. So we would have to constantly move along the entire width of the map. On the other hand, if we split it into a 2x2 grid, it wouldn't be very helpful; a lot of points would end up in the same quadrant, which would be very large, so we wouldn't have won anything. So there would have to be some fineness of the grid that's "just right" somewhere in the middle, which we'd need to find.

On the other hand, with Hilbert Curves this isn't a problem. That's because the limit of the finite approximations is continuous (which isn't the case with the limit of zig-zag curves). What that means, in essence, is that where a point falls on the curve won't jump around a lot when we make our grid finer; it will "home in" on its final location on the continuous curve. A first order Hilbert Curve is just a zig-zag curve, so it has the same problem as the 2x2-grid zig-zag line. But as we increase its order, the points will just become more and more local, instead of requiring scanning empty space. That is the interesting consequence of the Hilbert curve being space-filling.

Really, this video explains it much better than I ever could (even though I find the example given there slightly ridiculous). In the end, I mostly agree with the commentors; it probably wouldn't have been too hard to find a good approximation that would make a zig-zag curve work well. But I had Hilbert Curves ready anyway and appreciated the opportunity to use them.[/edit]

The first step for this was to get a list of locations and their corresponding positions. I was pretty sure that the online map should have that available somehow, as it uses some Google Maps framework to draw the map. So I looked at the network tab of the Chrome developer tools, found the URL that loaded the landmark data, copied the request as curl and saved the output for further massaging.

Chrome developer tools - copy as cURL

The returned file turns out not to actually be JSON (that'd be too easy, I guess) but some kind of javascript code which is then probably eventually eval'd to get the data (edit: it has been pointed out that this is just JSONP. I was aware that this is probably the case, but didn't feel comfortable using the term, as I don't know enough about it. I also didn't consider it very important):

/**/jQuery31106443585752152035_1500757689075(/* json-data */)

I just removed everything but the actual JSON with my editor and ran it through a pretty-printer to get at its actual structure. I spare you the details; it turns out the list of locations isn't even simply contained in it: it's embedded as another string, with HTML tags, as a property (twice!).

So I quickly hacked together some go code to dissect the data and voilà: Got a list of location names with the corresponding positions:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "os"
    "strings"
)

func main() {
    var data struct {
        Parse struct {
            Properties []struct {
                Name    string `json:"name"`
                Content string `json:"*"`
            } `json:"properties"`
        } `json:"parse"`
    }

    if err := json.NewDecoder(os.Stdin).Decode(&data); err != nil {
        panic(err)
    }

    var content string

    for _, p := range data.Parse.Properties {
        if p.Name == "description" {
            content = p.Content
        }
    }

    if content == "" {
        panic("no content")
    }

    var landmarks []struct {
        Type     string
        Geometry struct {
            Type        string
            Coordinates []float64
        }
        Properties struct {
            Type string
            Id   string
            Name string
            Link string
            Src  string
        }
    }

    if err := json.NewDecoder(arrayReader(content)).Decode(&landmarks); err != nil {
        panic(err)
    }

    for _, m := range landmarks {
        fmt.Printf("%s: %v\n", m.Properties.Name, m.Geometry.Coordinates)
    }
}

// arrayReader wraps the comma-separated list of objects in s in brackets, so
// that it decodes as a JSON array.
func arrayReader(s string) io.Reader {
    s = strings.TrimSuffix(strings.TrimSpace(s), ",")
    return io.MultiReader(strings.NewReader("["), strings.NewReader(s), strings.NewReader("]"))
}

This boded well. Now all I needed to do was to calculate the Hilbert curve coordinate for each of them and I'd have what I need. The Wikipedia article helpfully contains an implementation of the corresponding algorithm in C. xy2d assumes a discrete n×n grid of cells and returns an integer preimage of the given coordinates. The coordinates we have are all floating point numbers between 0 and 2 (ish) with 5 significant digits. I figured that 65536 cells per side should represent the granularity of the points well enough, so I chose that as n, ported the code to go, sorted the locations accordingly, and it actually worked!

func main() {
    // Same stuff as before

    sort.Slice(landmarks, func(i, j int) bool {
        xi := f2d(landmarks[i].Geometry.Coordinates[0])
        yi := f2d(landmarks[i].Geometry.Coordinates[1])
        xj := f2d(landmarks[j].Geometry.Coordinates[0])
        yj := f2d(landmarks[j].Geometry.Coordinates[1])
        di := xy2d(1<<16, xi, yi)
        dj := xy2d(1<<16, xj, yj)
        return di < dj
    })

    for _, m := range landmarks {
        fmt.Printf("%s: %v\n", m.Properties.Name, m.Geometry.Coordinates)
    }
}

// xy2d converts grid coordinates (x, y) on an n×n grid into the distance d
// along the Hilbert curve, following the Wikipedia reference implementation.
func xy2d(n, x, y int) int {
    var d int
    for s := n / 2; s > 0; s = s / 2 {
        var rx, ry int
        if (x & s) > 0 {
            rx = 1
        }
        if (y & s) > 0 {
            ry = 1
        }
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(s, x, y, rx, ry)
    }
    return d
}

// rot rotates/flips a quadrant so the curve orientation is correct.
func rot(n, x, y, rx, ry int) (int, int) {
    if ry == 0 {
        if rx == 1 {
            x = n - 1 - x
            y = n - 1 - y
        }
        x, y = y, x
    }
    return x, y
}

// f2d maps a float coordinate in [0, 2) to a cell index on the 1<<16 grid.
func f2d(f float64) int {
    return int((1 << 15) * f)
}

In the end, there was still a surprising amount of jumping around involved. I don't know whether that's accidental (i.e. due to my code being wrong) or inherent (that is, the Hilbert curve just can't map this perfectly well). I assume it's a bit of both. The list also contains the same landmark multiple times. This is because things like big lakes or plains were marked multiple times. It would be trivial to filter duplicates, but I actually found them reasonably helpful when having to jump around.

There might also be better approaches than Hilbert curves. For example, we could view it as an instance of the Traveling Salesman Problem with a couple of hundred points; it should be possible to find a good heuristic solution for that. On the other hand, a TSP solution doesn't necessarily only have short jumps, so it might not be that good?

In any case, this approach was definitely good enough for me and it's probably the nerdiest thing I ever did :)

100%

22. July 2017 23:56:00

15. July 2017

Insantity Industries

Embedded IPython for script development

Every once in a while, when working with Python code, there comes the point where I think “well, if I could just have a Python shell at this very point in the code…”, be it for debugging or for inspecting objects because I’m too lazy to look up the documentation.

The solution is quite simple. Make sure you have the ipython-package installed and just place

import IPython
IPython.embed()

at the point where you want to have your Python shell. And there you go: an IPython shell with the full state of the code at the point of usage.

by Jonas Große Sundrup at 15. July 2017 00:00:00

06. July 2017

michael-herbst.com

IWR School: Mathematical Methods for Quantum Chemistry

From 2nd to 6th October 2017 the IWR hosts the school Mathematical Methods for Quantum Chemistry, which aims at shedding some light on the simulation of quantum chemical problems and their mathematical properties. I am very happy to be part of the organising committee for this school, especially since I have been missing an opportunity like this myself in recent years.

Starting from an introduction to the basics of quantum chemistry as well as the required concepts of optimisation and non-linear eigenvalue problems, we will look at ways to tackle the relevant differential equations from a numerical and algorithmic viewpoint. Hands-on practical sessions allow participants to apply the treatment introduced in the lectures to simple practical problems.

After the school, participants will have an idea of the required interplay between the physical modelling, the mathematical representation and the computational treatment needed in this highly interdisciplinary field of research.

Speakers of the school include

The school mainly targets postgraduate students, postdocs and young researchers from mathematics and computational chemistry. Everyone is encouraged to participate actively by giving a short presentation of his or her work during the available sessions of contributed talks.

For further information as well as the program, check out our website. The deadline for applications is 8th August 2017 (direct registration link).

We intend to publish the material of the school as we go along at this location, so feel free to check back regularly for updates.

by Michael F. Herbst at 06. July 2017 22:00:00

05. July 2017

RaumZeitLabor

Excursion to the Stadtarchiv Mannheim

Dear friends of old filing cabinets!

On Thursday, 20 July 2017, we will take a tour of the old Stadtarchiv Mannheim. From 15:30 on we will be able to take a close look at the premises in the Collini-Center, hear a bit about the history of the city archive, and learn everything worth knowing about the upcoming move into the Ochsenpferchbunker. We will of course also examine the archive’s digitisation lab in detail and talk about the advantages and disadvantages of digital archiving.

So if you want to know what to watch out for when handling old archive materials, whether everything is still stored on microfilm, and what exactly the Marchivum actually is, you shouldn’t miss this.

If you want to join, just punch me a punch card by 16 July 2017 with your name and the names of the people you want to bring along.

by flederrattie at 05. July 2017 00:00:00

24. June 2017

Insantity Industries

Persistent IRC

Hanging around in IRC is fun. However, when you don’t stay connected all the time (why do you shut down your computer anyway…), you are missing out on a lot of that fun! This is obviously an unbearable state and needs immediate fixing!

Persistent IRC Setup

There are several ways of building a persistent IRC setup. One is a so-called bouncer that works like a proxy, mediating between you and the IRC network and buffering the messages. Another, simpler method is just running an IRC client on a machine that is always powered on, like a server, to which we can reconnect to read our messages (and of course write new ones).

Requirements

  • A computer that is powered and connected to the internet 24/7
  • an account on said machine that is accessible via SSH
  • Software installed on said machine: tmux, weechat

As an alternative to tmux, screen can be used if you cannot get tmux installed on the machine you want to run this setup on; it works very similarly. This post, however, will focus on the tmux setup.

SSH

We can access our account via

ssh username@server.name

We should end up in a plain terminal prompt where we can type.

weechat

In this shell we can then start weechat, a fully keyboard-controlled terminal IRC client.

Adding a server

To add a server, type

/server add <servername> <serveraddress> -ssl -autoconnect

The options -ssl and -autoconnect are both optional. -ssl will enable encryption to the IRC network by default, -autoconnect will enable autoconnect in the server config, so that our weechat will automatically connect to that server when we start it.

<servername> will be the weechat-internal name for this server, <serveraddress> can also include a port by appending it via <serveraddress>/<port>.

Adding the freenode-network could therefore read

/server add freenode chat.freenode.net/6697 -ssl -autoconnect

Afterwards, we can connect to the freenode-network by issuing

/connect freenode

as the autoconnect only takes effect when weechat starts. Alternatively, if we /quit weechat and start it again, we should get autoconnected. And now we are connected to the freenode network!

Setting your name

By default, our nick will be set according to our username on the system. It can be changed via

/nick <newnickname>

To change it persistently, we can set the corresponding option in weechat via

/set irc.server.<servername>.nicks "demodude"

to a custom nickname. Generally, options in weechat can be set by /set <optionname> <value> or read by /set <optionname>. Weechat also supports globbing in the optionname, so getting all options for our added server can be done by

/set irc.server.<servername>.*

Joining channels

Communication on IRC happens in channels. Each channel usually has a certain focus of stuff that happens there. We can then join channels via

/join #mytestchannel

which is a very boring channel, as no one except for ChanServ and us is in it, as you can see in the user list for this channel on the right. But it technically worked, and we can now just type to post messages in this channel.

In the bar right above where we type, we see 2:#mytestchannel. Weechat has the concept of buffers. Each buffer represents one server or channel. The channel we are in is now buffer 2. To get back to our server buffer, we can type /buffer 1 or hit F5; both bring us back to buffer 1, which is our server buffer. To get back to the channel buffer, /buffer <buffernumber> or F6 will bring us there.

To enter channels on a server upon starting weechat, we can set the option irc.server.<servername>.autojoin to a comma-separated list of channels, "#channel1,#channel2,#channel3". To find all the channels on an IRC server, we can issue a /list (for freenode, be aware, the list is HUGE).
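For example, with the freenode server added above and two placeholder channel names, this could read:

/set irc.server.freenode.autojoin "#mytestchannel,#anotherchannel"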

We can save our changes via /save and exit weechat via /quit.

Scrolling

We can scroll backward and forward using the PgUp and PgDown keys. If we are at the very bottom of a buffer, weechat will automatically scroll down with incoming messages. If we have scrolled up, weechat will not follow, as it assumes we want to read the part we scrolled to.

command recap

  • adding server: /server add <servername> <serveraddress>[/<port>] [-ssl] [-autoconnect]
  • connecting to a server: /connect <servername>
  • joining channels: /join <channelname>
  • switching buffers: /buffer <targetbuffer> or F5/F6
  • leaving channels: /close in channelbuffer
  • scrolling: PgUp, PgDown

We are up and running for IRC now! However, once we exit our weechat, we are no longer connected and are missing out on all the fun! So our weechat needs to run continuously.

Introducing…

tmux

Usually, upon SSH disconnect, all of our processes are killed (including our weechat). This is different with tmux: tmux allows us to reattach to what we were doing when we last SSH’d into our server.

So we exit our weechat and are back on our shell. There, we start

tmux

We now see a green bar at the bottom of our screen.

This is a tmux and a shell running inside of it.

We can now hit Ctrl+b to tell tmux to await a tmux command, i.e. not forward our typing to the shell but instead interpret it as a command to tmux itself. We can then type d to detach, and our shell is gone.

Afterwards, we can reattach to our tmux by running tmux attach and our shell is back! This also works when we detach and then log out of our machine, log in again and then reattach our tmux.
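Putting it all together, a typical session with the account from above looks roughly like this (a sketch, not literal output):

ssh username@server.name
tmux                 # start a new session; launch weechat inside it
# press Ctrl+b, then d, to detach; it is now safe to log out
tmux attach          # later: reattach and find weechat still running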

Now the only thing left is running a weechat inside our tmux and we are done. We can detach (or just close the terminal, which also works perfectly fine) and then reattach later to read what we have been missing out on. Our persistent IRC setup is ready to rumble.

Improving our setup

Up to now we have a running setup for persistent IRC. However, the user-experience of this setup can be significantly improved.

Tab completion

Weechat is capable of tab completion, e.g. when using /set. However, by default, weechat completes to the first option it finds in full, instead of completing only up to the first ., which is the config-section delimiter in weechat.

To change this, we search for completion options via

/set *completion*

and afterwards we

/set weechat.completion.partial_completion_command on
/set weechat.completion.partial_completion_command_arg on
/set weechat.completion.partial_completion_completion_other on

Weechat plugins

Weechat is a highly extendable software. A full list of extensions can be found here, some of the most useful ones are listed in the following.

You can install all scripts directly from within weechat via

/script install <scriptname>
  • buffers.pl provides a visual list of open buffers on the left side of weechat (from weechat 1.8 onwards this can be replaced by weechat’s built-in buflist, which provides the same feature)
  • beep.pl triggers our terminal bell when we are highlighted, mentioned or queried
  • autojoin_on_invite.py does basically what the name says
  • screen_away.py will detect when we detach from our IRC-setup and report “I’m not here” to the other person if we are queried
  • autosort.py keeps our buffers sorted alphabetically, no matter how many you have open
  • autojoin.py writes our currently joined channels into irc.server.<servername>.autojoin by issuing /autojoin --run

For autosort.py you most likely also want to

/set buffers.look.indenting on
/set irc.look.server_buffer independent

autoconnecting tmux

To automatically reattach to tmux every time we SSH into the machine our persistent IRC setup is running on, we can put

if [[ $TERM != 'screen' ]]
then
    tmux attach || tmux
fi

at the end of the file ~/.profile.

getting a shell besides your weechat

You can also open more shells in tmux. To do so, hit Ctrl+b and then c (for create). You will find another buffer (not a weechat buffer, but the equivalent concept in tmux) down in your tmux bar.

The buffers read

<buffernumber>:terminaltitle

You can then switch between buffers via Ctrl+b, <buffernumber>.

FUO (frequently used options)

A lot of IRC networks allow registering your nickname, to ensure that you can reuse your nick and no one else grabs it. If we have done that, weechat can automatically identify us upon connecting. To do so, we just need to set the password we chose when registering in the option

irc.server.<servername>.password

However, be aware that weechat saves that password in plaintext in its configuration.

Weechat can also trigger arbitrary commands when connecting to a server. This is useful for things like self-invites into invite-only channels or other things that you want to trigger. To use this, we just need to set

irc.server.<servername>.command

to a semicolon-separated list of commands as you would issue them manually in weechat.
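For example, with placeholder values for both options (the password and the channel below are made up):

/set irc.server.freenode.password "mysecretpassword"
/set irc.server.freenode.command "/msg ChanServ invite #secretchannel"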

by Jonas Große Sundrup at 24. June 2017 00:00:00

22. June 2017

Insantity Industries

Hardwaretriggered Backups

No one wants to do backups, but everyone wants a restore once something has gone wrong. What we are looking for, therefore, are fully automated backups that trigger once you plug in your backup drive (or it appears $somehow).

So our objective is to detect the plugging in of a USB stick or hard disk and to trigger a backup script.

udev

The solution is: udev. udev is the component responsible for handling hotplug events in most Linux distributions.

Those events can be monitored via udevadm monitor. When plugging in a device, we then see events like

KERNEL[1108.237335] add      /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2 (usb)
KERNEL[1108.778873] add      /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6 (scsi)

for different subsystems (USB-subsystem and SCSI-subsystem in this case) which we can hook into and trigger arbitrary commands.

writing udev-rules

To identify a device, we need to take a look at its properties. For a harddisk this can be done by

udevadm info -a -p $(udevadm info -q path -n $devicename)

where $devicename can be sdb or /dev/sdb, for example. udevadm info -q path -n $devicename gives us the identifier in the /sys subsystem that we already saw in the output of udevadm monitor. udevadm info -a -p $devicepath then uses this identifier to walk through the /sys subsystem and gives us all the properties associated with the device or its parent nodes, for example my thumbdrive’s occurrence in the scsi subsystem:

KERNELS=="6:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
ATTRS{device_blocked}=="0"
ATTRS{device_busy}=="0"
ATTRS{dh_state}=="detached"
ATTRS{eh_timeout}=="10"
ATTRS{evt_capacity_change_reported}=="0"
ATTRS{evt_inquiry_change_reported}=="0"
ATTRS{evt_lun_change_reported}=="0"
ATTRS{evt_media_change}=="0"
ATTRS{evt_mode_parameter_change_reported}=="0"
ATTRS{evt_soft_threshold_reached}=="0"
ATTRS{inquiry}==""
ATTRS{iocounterbits}=="32"
ATTRS{iodone_cnt}=="0x117"
ATTRS{ioerr_cnt}=="0x2"
ATTRS{iorequest_cnt}=="0x117"
ATTRS{max_sectors}=="240"
ATTRS{model}=="DataTraveler 2.0"
ATTRS{queue_depth}=="1"
ATTRS{queue_type}=="none"
ATTRS{rev}=="PMAP"
ATTRS{scsi_level}=="7"
ATTRS{state}=="running"
ATTRS{timeout}=="30"
ATTRS{type}=="0"
ATTRS{vendor}=="Kingston"

We can use those attributes to identify our device amongst all the devices that we have.

A udev-rule now contains two major component types:

  • matching statements that identify the event and device we want to match
  • action statements that take action on the matched device

which is of course a simplification, but it will suffice for the purpose of this blogpost and most standard applications.

To do so, we first create a file /etc/udev/rules.d/myfirstudevrule.rules. The filename doesn’t matter as long as it ends in .rules, as only those files will be read by udev.

In this udev-rule, we first need to match our device. To do so, I will pick three of the properties above that sound like they are sufficient to uniquely match my thumbdrive.

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston"

I have added a statement matching the ACTION, as we of course only want to trigger a backup when the device appears. You can also match entire device classes by choosing the matcher properties accordingly.
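For example, a broader match on any Kingston scsi device, regardless of model (a hypothetical variant of the rule above), could read:

SUBSYSTEMS=="scsi", ACTION=="add", ATTRS{vendor}=="Kingston"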

To trigger a command, we can add it to the list of commands RUN that shall run when the device is inserted, for example creating the file /tmp/itsalive:

RUN+="/usr/bin/touch /tmp/itsalive"

So our entire (rather lengthy) udev-rule in /etc/udev/rules.d/myfirstudevrule.rules reads

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston", RUN+="/usr/bin/touch /tmp/itsalive"

and we can trigger arbitrary commands with it.

triggering long-running jobs

However, commands in the RUN list are subject to a time constraint of 180 seconds. For a backup, this is likely to be insufficient. So we need a way to start long-running jobs from udev.

The solution for this is to move the command into a systemd unit. Besides being able to run for longer than 180s, we also get proper logging of our backup command in the journal that way, which is always good to have.

So we create a file /etc/systemd/system/mybackup.service containing

[Unit]
Description=backup system

[Service]
# optional, might be yourself, root, …
User=backupuser
Type=simple
ExecStart=/path/to/backupscript.sh

We then need to modify the action part of our rule from appending to RUN to read

TAG+="systemd", ENV{SYSTEMD_WANTS}="mybackup.service"

Our entire udev-rule then reads

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston", TAG+="systemd", ENV{SYSTEMD_WANTS}="mybackup.service"

improve usability

To further improve usability, we can additionally append

SYMLINK+="backupdisk"

This way, an additional symlink /dev/backupdisk will be created upon the device appearing that can be used for scripting.

using a disk in a dockingstation

At home I have a docking station for my laptop, and the disk I use for local backups is built into the bay of the docking station. From my computer’s point of view, this disk is connected to an internal port. When docking onto the station, it is not recognized, as internal ports are not monitored for hotplugging. Upon docking, it is therefore necessary to rescan the internal scsi bus the disk is connected to. This can be done by issuing a rescan of the entire scsi host host1:

echo '- - -' > /sys/class/scsi_host/host1/scan

In my case the disk is connected to the port at scsi-host 1. You can find out the scsi-host of your disk by running

ls -l /sys/block/sda

where sda is the device node of the block device whose scsi-host you are interested in. This returns a string like

../devices/pci0000:00/0000:00:1f.2/ata1/host1/target0:0:0/0:0:0:0/block/sda

where you can see that for this disk, the corresponding scsi-host is host1. This can then be used to issue the rescan of the correct scsi bus. The corresponding udev rule in my case reads as follows (note that RUN does not spawn a shell, so the redirection needs an explicit /bin/sh -c):

SUBSYSTEM=="platform", ACTION=="change", ATTR{type}=="ata_bay", RUN+="/usr/bin/echo '- - -' > /sys/class/scsi_host/host1/scan"

Afterwards, the disk should show up and can be matched like any other disk as described above.

by Jonas Große Sundrup at 22. June 2017 00:00:00

18. June 2017

Mero’s Blog

How to not use an http-router in go

If you don't write web-thingies in go, you can stop reading now. Also, I am somewhat snarky in this article. I intend that to be humorous but am probably failing. Sorry for that.

As everyone™ knows, people need to stop writing routers/muxes in go. Some people attribute the abundance of routers to the fact that the net/http package fails to provide a sufficiently powerful router, so people roll their own. This is also reflected in this post, in which a gopher complains about how complex and hard to maintain it would be to route requests using net/http alone.

I disagree with both of these. I don't believe the problem is a lack of a powerful enough router in the stdlib. I also disagree that routing based purely on net/http has to be complicated or hard to maintain.

However, I do believe that the community currently lacks good guidance on how to properly route requests using net/http. The default result seems to be that people assume they are supposed to use http.ServeMux and get frustrated by it. In this post I want to explain why routers in general - including http.ServeMux - should be avoided and what I consider simple, maintainable and scalable routing using nothing but the stdlib.

But why?


Why do I believe that routers should not be used? I have three arguments for that: They need to be very complex to be useful, they introduce strong coupling and they make it hard to understand how requests are flowing.

The basic idea of a router/mux is that you have a single component which looks at a request and decides what handler to dispatch it to. In your func main() you then create your router, define all your routes with all your handlers, and then call Serve(l, router), and everything's peachy.

But since URLs can encode a lot of important information to base your routing decisions on, doing it this way requires a lot of extra features. The stdlib ServeMux is an incredibly simple router, but even that contains a certain amount of magic in its routing decisions: depending on whether a pattern contains a trailing slash or not, it might either be matched as a prefix or as a complete URL, and longer patterns take precedence over shorter ones, and oh my. But the stdlib router isn't even powerful enough. Many people need to match URLs like "/articles/{category}/{id:[0-9]+}" in their router and, while we're at it, also extract those nifty arguments. So they're using gorilla/mux instead. An awful lot of code to route requests.

Now, without cheating (and actually knowing that package counts as cheating), tell me for each of these requests:

  • GET /foo
  • GET /foo/bar
  • GET /foo/baz
  • POST /foo
  • PUT /foo
  • PUT /foo/bar
  • POST /foo/123

Which handler do they map to, and what status code do they return ("OK"? "Bad Request"? "Not Found"? "Method Not Allowed"?) in this routing setup?

r := mux.NewRouter()
r.PathPrefix("/foo").Methods("GET").HandlerFunc(Foo)
r.PathPrefix("/foo/bar").Methods("GET").HandlerFunc(FooBar)
r.PathPrefix("/foo/{user:[a-z]+}").Methods("GET").HandlerFunc(FooUser)
r.PathPrefix("/foo").Methods("POST").HandlerFunc(PostFoo)

What if you permute the lines in the routing-setup?

You might guess correctly. You might not. There are multiple sane routing strategies that you could base your guess on. The routes might be tried in source order. They might be tried in order of specificity. Or a complicated mixture of both. The router might realize that it could match a route if the method were different and return a 405. Or it might not. Or it might decide that /foo/123 is, technically, an illegal argument, not a missing page. I couldn't really find a good answer to any of these questions in the documentation of gorilla/mux, for what it's worth. Which meant that when my web app suddenly didn't route requests correctly, I was stumped and needed to dive into code.

You could say that people just have to learn how gorilla/mux decides its routing (I believe it's "as defined in source order", by the way). But there are at least fifteen thousand routers for go, and no newcomer to your application will ever know all of them. When a request does the wrong thing, I don't want to have to debug your router first to find out what handler it is actually going to and then debug that handler. I want to be able to follow the request through your code, even if I have next to zero familiarity with it.

Lastly, this kind of setup requires that all the routing decisions for your application are done in a central place. That introduces edit-contention, it introduces strong coupling (the router needs to be aware of all the paths and packages needed in the whole application) and it becomes unmaintainable after a while. You can alleviate that by delegating to subrouters though; which really is the basis of how I prefer to do all of this these days.

How to use the stdlib to route

Let's build the toy example from this medium post. It's not terribly complicated but it serves nicely to illustrate the general idea. The author intended to show that using the stdlib for routing would be too complicated and wouldn't scale. But my thesis is that the issue is that they are effectively trying to write a router. They are trying to encapsulate all the routing decisions into one single component. Instead, separate concerns and make small, easily understandable routing decisions locally.

Remember how I told you that we're going to use only the stdlib for routing?

Those were lies, plain and simple.

We are going to use this one helper function:

// ShiftPath splits off the first component of p, which will be cleaned of
// relative components before processing. head will never contain a slash and
// tail will always be a rooted path without trailing slash.
func ShiftPath(p string) (head, tail string) {
    p = path.Clean("/" + p)
    i := strings.Index(p[1:], "/") + 1
    if i <= 0 {
        return p[1:], "/"
    }
    return p[1:i], p[i:]
}
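To get a feeling for its behavior, here is a quick sketch (not from the original post) of a few inputs and the head/tail pairs ShiftPath produces:

package main

import "fmt"

// Assumes ShiftPath from above is defined in the same package.
func main() {
    for _, p := range []string{"/user/123/profile", "/user", "/"} {
        head, tail := ShiftPath(p)
        fmt.Printf("%q -> head=%q tail=%q\n", p, head, tail)
    }
    // Output:
    // "/user/123/profile" -> head="user" tail="/123/profile"
    // "/user" -> head="user" tail="/"
    // "/" -> head="" tail="/"
}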

Let's build our app. We start by defining a handler type. The premise of this approach is that handlers are strictly separated in their concerns. They either correctly handle a request with the correct status code or they delegate to another handler which will do that. They only need to know about the immediate handlers they delegate to and they only need to know about the sub-path they are rooted at:

type App struct {
    // We could use http.Handler as a type here; using the specific type has
    // the advantage that static analysis tools can link directly from
    // h.UserHandler.ServeHTTP to the correct definition. The disadvantage is
    // that we have slightly stronger coupling. Do the tradeoff yourself.
    UserHandler *UserHandler
}

func (h *App) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    if head == "user" {
        h.UserHandler.ServeHTTP(res, req)
        return
    }
    http.Error(res, "Not Found", http.StatusNotFound)
}

type UserHandler struct {
}

func (h *UserHandler) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    id, err := strconv.Atoi(head)
    if err != nil {
        http.Error(res, fmt.Sprintf("Invalid user id %q", head), http.StatusBadRequest)
        return
    }
    switch req.Method {
    case "GET":
        h.handleGet(id)
    case "PUT":
        h.handlePut(id)
    default:
        http.Error(res, "Only GET and PUT are allowed", http.StatusMethodNotAllowed)
    }
}

func main() {
    a := &App{
        UserHandler: new(UserHandler),
    }
    http.ListenAndServe(":8000", a)
}
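Assuming handleGet and handlePut are filled in, the routing behavior can be probed with curl (a sketch; the exact response bodies depend on the handler implementations):

curl http://localhost:8000/user/123            # dispatched to handleGet(123)
curl http://localhost:8000/does-not-exist      # 404 Not Found (from App)
curl http://localhost:8000/user/abc            # 400 Invalid user id "abc"
curl -X DELETE http://localhost:8000/user/123  # 405 Only GET and PUT are allowed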

This seems very simple to me (not necessarily in "lines of code" but definitely in "understandability"). You don't need to know anything about any routers. If you want to understand how a request is routed, you start by looking at main. You see that (*App).ServeHTTP is used to serve any request, so you :GoDef to its definition. You see that it decides to dispatch to UserHandler, you go to its ServeHTTP method, and you see directly how it parses the URL and what decisions it makes based on it.

We still need to add some patterns to our application. Let's add a profile handler:

type UserHandler struct{
    ProfileHandler *ProfileHandler
}

func (h *UserHandler) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    id, err := strconv.Atoi(head)
    if err != nil {
        http.Error(res, fmt.Sprintf("Invalid user id %q", head), http.StatusBadRequest)
        return
    }

    if req.URL.Path != "/" {
        // Consume the next path component; the delegate sees the rest.
        head, req.URL.Path = ShiftPath(req.URL.Path)
        switch head {
        case "profile":
            // We can't just make ProfileHandler an http.Handler; it needs the
            // user id. Let's instead…
            h.ProfileHandler.Handler(id).ServeHTTP(res, req)
        case "account":
            // Left as an exercise to the reader.
        default:
            http.Error(res, "Not Found", http.StatusNotFound)
        }
        return
    }
    // As before
    ...
}

type ProfileHandler struct {
}

func (h *ProfileHandler) Handler(id int) http.Handler {
    return http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
        // Do whatever
    })
}

This may, again, seem complicated but it has the cool advantage that the dependencies of ProfileHandler are clear at compile time. It needs a user id which needs to come from somewhere. Providing it via this kind of method ensures this is the case. When you refactor your code, you won't accidentally forget to provide it; it's impossible to miss!

There are two potential alternatives to this if you prefer them: You could put the user-id into req.Context() or you could be super-hackish and add them to req.Form. But I prefer it this way.

You might argue that App still needs to know all the transitive dependencies (because they are members, transitively) so we haven't actually reduced coupling. But that's not true. Its UserHandler could be created by a NewUserHandler function which gets passed its dependencies via the mechanism of your choice (flags, dependency injection,…) and gets wired up in main. All App needs to know is the API of the handlers it's directly invoking.
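A minimal sketch of such wiring, with imports elided (NewUserHandler, the db field on ProfileHandler and the DSN environment variable are all made up for illustration):

// NewUserHandler constructs a UserHandler with all its dependencies;
// App itself never needs to know about db.
func NewUserHandler(db *sql.DB) *UserHandler {
    return &UserHandler{ProfileHandler: &ProfileHandler{db: db}}
}

func main() {
    db, err := sql.Open("postgres", os.Getenv("DSN")) // driver import omitted
    if err != nil {
        log.Fatal(err)
    }
    a := &App{UserHandler: NewUserHandler(db)}
    http.ListenAndServe(":8000", a)
}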

Conclusion

I hope I convinced you that routers in and of themselves are harmful. Pulling the routing into one component means that that component needs to encapsulate an awful lot of complexity, which makes it hard to debug. And as no single existing router will contain all the complicated cleverness you want to base your routing decisions on, you are tempted to write your own. Which everyone does.

Instead, split your routing decisions into small, independent chunks and express them in their own handlers. And wire the dependencies up at compile time, using the type system of go, and reduce coupling.

18. June 2017 22:57:21

michael-herbst.com

Lazy matrices in quantum chemistry

On my way back from Copenhagen last Thursday I stopped in Kiel to visit Henrik Larsson and some other people from last year's Deutsche Schülerakademie in Grovesmühle.

Henrik is doing his PhD at the group of Prof. Bernd Hartke and so I took the opportunity and gave a short talk at their group seminar on my progress in Copenhagen (slides are attached below).

Most people in the Hartke group work on global optimisation of structures or reaction pathways, or on fitting reactive force fields. Hence my talk turned out to be a little off-topic. On the other hand, I had an afternoon of enjoyable and rather unusual discussions, where I learned quite a lot about their work.

Slides: Lazy-matrices in quantum chemistry (group seminar talk), licensed under a Creative Commons License.

by Michael F. Herbst at 18. June 2017 22:00:00

25. May 2017

sECuREs Webseite

Auto-opening portal pages with NetworkManager

Modern desktop environments like GNOME offer UI for automatically opening captive portal login pages, but if you’re using a more bare-bones window manager, you’re on your own. This article outlines how to get a login page opened in your browser when you’re behind a portal.

If your distribution does not automatically enable it (Fedora does, Debian doesn’t), you’ll first need to enable connectivity checking in NetworkManager:

# cat >> /etc/NetworkManager/NetworkManager.conf <<'EOT'
[connectivity]
uri=http://network-test.debian.org/nm
EOT
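To verify that connectivity checking works, NetworkManager can be asked for its current verdict via nmcli; behind a captive portal this should eventually print “portal”:

# nmcli networking connectivity check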

Then, add a dispatcher hook which will open a browser when NetworkManager detects you’re behind a portal. Note that the username must be hard-coded because the hook runs as root, so this hook will not work as-is on multi-user systems. The URL I’m using is an always-http URL, also used by Android (I expect it to be somewhat stable). Portals will redirect you to their login page when you open this URL.

# cat > /etc/NetworkManager/dispatcher.d/99portal <<'EOT'
#!/bin/bash

[ "$CONNECTIVITY_STATE" = "PORTAL" ] || exit 0

USER=michael
USERHOME=$(eval echo "~$USER")
export XAUTHORITY="$USERHOME/.Xauthority"
export DISPLAY=":0"
su $USER -c "x-www-browser http://www.gstatic.com/generate_204"
EOT
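Note that NetworkManager only executes dispatcher scripts that are owned by root and executable, so depending on how you created the file, you may additionally need:

# chmod 755 /etc/NetworkManager/dispatcher.d/99portal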

by Michael Stapelberg at 25. May 2017 09:37:17

19. May 2017

michael-herbst.com

HPC day at Niels Bohr Institute

For the second time I currently have the pleasure of staying a couple of weeks at the eScience group of Prof. Brian Vinter at the Niels Bohr Institute in Copenhagen. Same as last time the atmosphere in the group is very welcoming and I really enjoy the many fruitful discussions I had so far with the PhDs and PostDocs of the group.

Many of them work on the Bohrium project, which aims at providing a simple-to-use high-level interface for performing computations on all sorts of hardware, i.e. both CPUs as well as GPUs or mixed systems. Even though there surely are some performance drawbacks compared to a fully native implementation, many man-hours can be saved by implementing an algorithm just once, which then automatically runs on whatever hardware happens to be around. I really like this idea and hopefully we will manage to integrate Bohrium into our linalgwrap linear algebra wrapper library very soon.

The main purpose of my visit, however, is to continue the work with my long-term collaborator James Avery on our molsturm modular quantum chemistry code and of course on linalgwrap as well. So far the progress is very good and it seems we are soon ready to do a couple of very simple calculations on closed shell atoms or ions.

Just yesterday I furthermore had the pleasure of introducing linalgwrap and our lazy matrices to a wider audience at the high performance computing day here at the Niels Bohr Institute. The slides of my presentation are attached below.

Slides: Lazy-matrices for apply-based algorithms (HPC day 2017), licensed under a Creative Commons License.

by Michael F. Herbst at 19. May 2017 22:00:00

28. April 2017

michael-herbst.com

A new home in a new design

Finally I successfully managed to migrate my blog articles from the old wordpress blog at blog.mfhs.eu to their new home right here. Thanks to the awesome static site generator pelican, this site now works without any of that horrible php in the background: just static html and some css.

Note that I have not yet replicated all pages from the old blog.mfhs.eu site, but new content, especially new blog articles, will only appear here from now on.

Finally ... things might still be broken, beware ;).

by Michael F. Herbst at 28. April 2017 22:00:00

24. April 2017

michael-herbst.com

look4bas: Small script to search for Gaussian basis sets

In the past couple of days I hacked together a small Python script, which searches through the EMSL basis set exchange library of Gaussian basis sets via the commandline.

Unlike the webinterface, it allows using grep-like regular expressions for matching the names and descriptions of the basis sets. Of course, limiting the selection by requiring basis functions for specific elements to be defined is possible as well.

All matching basis sets can be easily downloaded at once. Right now only downloading the basis set files in Gaussian94 format is implemented, however.

The code, further information and some examples can be found on https://github.com/mfherbst/look4bas.

by Michael F. Herbst at 24. April 2017 22:00:00

RaumZeitLabor

Excursion to the Luxor Filmpalast Bensheim

In few places do technology and pop culture sit as closely together as in the cinema. And that is especially true of the Luxor Filmpalast Bensheim. Besides a screening room done up entirely in Star Wars design, there is also a shark tank and all sorts of pop culture exhibits to see, for example a DeLorean converted in Back-to-the-Future style.

Reason enough for us to take a closer look.

We will meet on Saturday, 20 May, at 16:00 in the entrance area of the Luxor Filmpalast Bensheim and will then have Luxor 7 all to ourselves. This is the above-mentioned Star Wars themed screening room, which also comes with its own lounge area.

At 16:30 we will then do what one normally does in a cinema: we will watch a film. Exclusively for us, there will be a special screening of “Hackers”, the 1995 masterpiece. HACK THE PLANET!

After the film, a technically versed employee will give us a behind-the-scenes tour of the cinema, and, as a special extra, we will have the opportunity to visit the unofficial “museum” that is also located in the cinema building: an extensive private collection of action figures and other merchandise, put together over the course of 35 years.

The event is expected to end around 19:30.

Participation in the excursion costs 25€ per person. As usual, membership in the RaumZeitLabor e.V. is not a prerequisite for taking part in the event.

The number of participants is limited, so a binding registration is required. To register, send me an email with the subject “Kino Exkursion”.

As always, participants should coordinate among themselves and form carpools where possible. Further information about the excursion will be sent to registered participants by email in due time.

by blabber at 24. April 2017 00:00:00

16. April 2017

sECuREs Webseite

HomeMatic re-implementation

A while ago, I got myself a bunch of HomeMatic home automation gear (valve drives, temperature and humidity sensors, power switches). The gear itself works reasonably well, but I found the management software painfully lacking. Hence, I re-implemented my own management software. In this article, I’ll describe my method, in the hope that others will pick up a few nifty tricks to make future re-implementation projects easier.

Motivation

When buying my HomeMatic devices, I decided to use the wide-spread HomeMatic Central Control Unit (CCU2). This embedded device runs the proprietary rfd wireless daemon, which offers an XML-RPC interface, used by the web interface.

I find the CCU2’s web interface really unpleasant. It doesn’t look modern, it takes ages to load, and it doesn’t indicate progress. I frequently find myself clicking on a button, only to realize that my previous click was still not processed entirely, and then my current click ends up on a different element that I intended to click. Ugh.

More importantly, even if you avoid the CCU2’s web interface altogether and only want to extract sensor values, you’ll come to realize that the device crashes every few weeks. Due to memory pressure, the rfd is killed and doesn’t come back. As a band-aid, I wrote a watchdog cronjob which would just reboot the device. I also reported the bug to the vendor, but never got a reply.

When I tried to update the software to a more recent version, things went so wrong that I decided to downgrade and not touch the device anymore. This is not a good state to be in, so eventually I started my project to replace the device entirely. The replacement is hmgo, a central control unit implemented in Go, deployed to a Raspberry Pi running gokrazy. The radio module I’m using is HomeMatic’s HM-MOD-RPI-PCB, which is connected to a serial port, much like in the CCU2 itself.

Preparation: gather and visualize traces

In order to compare the behavior of the CCU2 stock firmware against my software, I wanted to capture some traces. Looking at what goes on over the air (or on the wire) is also a good learning opportunity to understand the protocol.

  1. I wrote a Wireshark dissector (see contrib/homematic.lua). It is a quick & dirty hack, does not properly dissect everything, but it works for the majority of packets. This step alone will make the upcoming work so much easier, because you won’t need to decode packets in your head (and make mistakes!) so often.
  2. I captured traffic from the working system. Conveniently, the CCU2 allows SSH'ing in as root after setting a password. Once logged in, I used lsof and ls /proc/$(pidof rfd)/fd to identify the file descriptors which rfd uses to talk to the serial port. Then, I used strace -e read=7,write=7 -f -p $(pidof rfd) to get hex dumps of each read/write. These hex dumps can directly be fed into text2pcap and can be analyzed with Wireshark.
  3. I also wrote a little Perl script to extract and convert packet hex dumps from homegear debug logs to text2pcap-compatible format. More on that in a bit.

Preparation: research

Then, I gathered as much material as possible. I found and ended up using the following resources (in order of frequency):

  1. homegear source
  2. FHEM source
  3. homegear presentation
  4. hmcfgusb source
  5. FHEM wiki

Preparation: lab setup

Next, I got the hardware to work with a known-good software. I set up homegear on a Raspberry Pi, which took a few hours of compilation time because there were no pre-built Debian stretch arm64 binaries. This step established that the hardware itself was working fine.

Also, I got myself another set of traces from homegear, which is always useful.

Implementation

Now the actual implementation can begin. Note that up until this point, I hadn’t written a single line of actual program code. I defined a few milestones which I wanted to reach:

  1. Talk to the serial port.
  2. Successfully initialize the HM-MOD-RPI-PCB
  3. Receive any BidCoS broadcast packet
  4. Decode any BidCoS broadcast packet (can largely be done in a unit test)
  5. Talk to an already-paired device (re-using the address/key from my homegear setup)
  6. Configure an already-paired device
  7. Pair a device

To make the implementation process more convenient, I changed the compilation command of my editor to cross-compile the program, scp it to the Raspberry Pi and run it there. This allowed me to test my code with one keyboard shortcut, and I love quick feedback.

Retrospective

The entire project took a few weeks of my spare time. If I had taken some time off of work, I’m confident I could have implemented it in about a week of full-time work.

Consciously doing research, preparation and milestone planning was helpful. It gave me a good sense of my progress and of achievable goals.

As I’ve learnt previously, investing in tools pays off quickly, even for one-off projects like this one. I’d recommend everyone who’s doing protocol-related work to invest some time in learning to use Wireshark and writing custom Wireshark dissectors.

by Michael Stapelberg at 16. April 2017 10:20:00

25. March 2017

sECuREs Webseite

Review: Turris Omnia (with Fiber7)

The Turris Omnia is an open source (an OpenWrt fork) open hardware internet router created and supported by nic.cz, the registry for the Czech Republic. It’s the successor to their Project Turris, but with better specs.

I was made aware of the Turris Omnia while it was being crowd-funded on Indiegogo and decided to support the cause. I’ve been using OpenWrt on my wireless infrastructure for many years now, and finding a decent router with enough RAM and storage for the occasional experiment used to not be an easy task. As a result, I had been using a very stable but also very old tp-link WDR4300 for over 4 years.

For the last 2 years, I had been using a Ubiquiti EdgeRouter Lite (Erlite-3) with a tp-link MC220L media converter and the aforementioned tp-link WDR4300 access point. Back then, that was one of the few setups which delivered 1 Gigabit in passively cooled (quiet!) devices running open source software.

With its hardware specs, the Turris Omnia promised to be a big upgrade over my old setup: the project pages described the router to be capable of processing 1 Gigabit, equipped with a 802.11ac WiFi card and having an SFP slot for the fiber transceiver I use to get online. Without sacrificing performance, the Turris Omnia would replace 3 devices (media converter, router, WiFi access point), which yields nice space and power savings.

Performance

Wired performance

As expected, the Turris Omnia delivers a full Gigabit. A typical speedtest.net result is 2ms ping, 935 Mbps down, 936 Mbps up. Speeds displayed by wget and other tools max out at the same values as with the Ubiquiti EdgeRouter Lite. Latency to well-connected targets such as Google remains at 0.7ms.

WiFi performance

I did a few quick tests on speedtest.net with the devices I had available, and here are the results:

Client                        Down (WDR4300)  Down (Omnia)  Up
ThinkPad X1 Carbon 2015       35 Mbps         470 Mbps      143 Mbps
MacBook Pro 13" Retina 2014   127 Mbps        540 Mbps      270 Mbps
iPhone SE                     n/a             226 Mbps      227 Mbps

Compatibility (software/setup)

OpenWrt’s default setup at the time when I set up this router was the most pleasant surprise of all: using the Turris Omnia with Fiber7 is literally Plug & Play. After opening the router’s wizard page in your web browser, you just need to click “Next” a few times and you’re online, with IPv4 and IPv6 configured in a way that will be good enough for many people.

I realize this is due to Fiber7 using “just” DHCPv4 and DHCPv6 without requiring credentials, but man is this nice to see. Open source/hardware devices which just work out of the box are not something I’m used to :-).

One thing I ended up changing, though: in the default setup (at the time when I tried it), hostnames sent to the DHCP server would not automatically resolve locally via DNS. I.e., I could not use ping beast without any further setup to send ping probes to my gaming computer. To fix that, for now one needs to disable KnotDNS in favor of dnsmasq’s built-in DNS resolver. This will leave you without KnotDNS’s DNSSEC support. But I prefer ease of use in this particular trade-off.

Compatibility (hardware)

Unfortunately, the SFPs which Fiber7 sells/requires are not immediately compatible with the Turris Omnia. If I understand correctly, the issue is related to speed negotiation.

After months of discussion in the Turris forum and not much success on fixing the issue, Fiber7 now offers to disable speed negotiation on your port if you send them an email. Afterwards, your SFPs will work in media converters such as the tp-link MC220L and the Turris Omnia.

The downside is that debugging issues with your port becomes harder, as Fiber7 will no longer be able to see whether your device correctly negotiates speed, the link will just always be forced to “up”.

Updates

The Turris Omnia’s automated updates are a big differentiator: without having to do anything, the Turris Omnia will install new software versions automatically. This feature alone will likely improve your home network’s security and this feature alone justifies buying the router in my eyes.

Of course, automated upgrades constitute a certain risk: if the new software version or the upgrade process has a bug, things might break. This happened once to me in the 6 months that I have been using this router. I still haven’t seen a statement from the Turris developers about this particular breakage — I wish they would communicate more.

Since you can easily restore your configuration from a backup, I’m not too worried about this. In case you’re travelling and really need to access your devices at home, I would recommend to temporarily disable the automated upgrades, though.

Product Excellence

One feature I love is that the brightness of the LEDs can be controlled, to the point where you can turn them off entirely. It sounds trivial, but the result is that I don’t have to apply tape to this device to dim its LEDs. To not disturb watching movies, playing games or having guests crash on the living room couch, I can turn the LEDs off and only turn them back on when I actually need to look at them for something — in practice, that’s never, because the router just works.

Recovering the software after horribly messing up an experiment is pretty easy: when holding the reset button for a certain number of seconds, the device enters a mode where a new firmware file is flashed to the device from a plugged-in USB memory stick. What’s really nice is that the mode is indicated by the color of the LEDs, sparing you the tedious counting of seconds that other devices require, which I always tend to start at the wrong second. This is a very good compromise between saving cost and pleasing developers.

The Turris Omnia has a serial port readily accessible via a pin header that’s reachable after opening the device. I definitely expected an easily accessible serial port in a device which targets open source/hardware enthusiasts. In fact, I have two ideas to make that serial port even better:

  1. Label the pins on the board — that doesn’t cost a cent more and spares some annoying googling for a page which isn’t highly ranked in the search results. Sparing some googling is a good move for an internet router: chances are that accessing the internet will be inconvenient while you’re debugging your router.
  2. Expose the serial port via USB-C. The HP 2530-48G switch does this: you don’t have to connect a USB2serial to a pin header yourself, rather you just plug in a USB cable which you’ll probably carry anyway. Super convenient!

Conclusion

tl;dr: if you can afford it, buy it!

I’m very satisfied with the Turris Omnia. I like how it is both open source and open hardware. I rarely want to do development with my main internet router, but when I do, the Turris Omnia makes it pleasant. The performance is as good as advertised, and I have not noticed any stability problems with neither the router itself nor the WiFi.

I outlined above how the next revision of the router could be made ever so slightly more perfect, and I described the issues I ran into (SFP compatibility and an update breaking my non-standard setup). If these aren’t deal-breakers for you (and they likely aren’t), you should definitely consider the Turris Omnia!

by Michael Stapelberg at 25. March 2017 09:40:00