Planet NoName e.V.

10. April 2014

Mero’s Blog

Heartbleed: New certificates

Due to the Heartbleed vulnerability I had to recreate all TLS keys on my server. Since CAcert appears to be mostly dead (or at least dying), I am currently on the lookout for a new CA. In the meantime I have switched to self-signed certificates for all my services.

The new fingerprints are:

Service SHA1-Fingerprint
merovius.de 8C:85:B1:9E:37:92:FE:C9:71:F6:0E:C6:9B:25:9C:CD:30:2B:D5:35
blog.merovius.de 1B:DB:45:11:F3:EE:66:8D:3B:DF:63:B9:7C:D9:FC:26:A4:D1:E1:B8
git.merovius.de 65:51:16:25:1A:9E:50:B2:F7:D7:8A:2B:77:DE:DE:0C:02:3C:6C:ED
smtp (mail.merovius.de) 1F:E5:3F:9D:EE:B4:47:AE:2E:02:D8:2C:1E:2A:6C:FC:D6:62:99:F4
jabber (merovius.de) 15:64:29:49:82:0E:8B:76:47:1A:19:5B:98:6F:E4:56:24:D9:69:07
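
To compare these against what a server actually presents, you can, for example, print the SHA1 fingerprint of the served certificate with openssl (shown here for the HTTPS case; STARTTLS services such as SMTP additionally need the -starttls option of openssl s_client):

openssl s_client -connect merovius.de:443 < /dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1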

This is of course useless in the general case, but if you already trust my gpg-key, you can use

curl http://blog.merovius.de/2014/04/10/heartbleed-new-certificates.html | gpg

to retrieve a signed version of this post and verify it.

10. April 2014 21:28:25

08. April 2014

RaumZeitLabor

BÄM!!!!! Superhero Party

Holy Hacklaceshitstorm, Batman!

Bam! Zap! Pow!

On 03.05.2014, the >>Justice League of Boveristraße<< will meet in the RZL-Cave from 7 pm! Let’s kick some ass!

We kindly ask you to take your capes to the dry cleaner beforehand, to check your eye masks for moth damage, and to bring your laser cannons, swords and shields up to date!

If your costumes have become too small over all this time, that is of course no problem either. Anyone who wants to fight the forces of evil with us without a fitting outfit will be dressed up and made up by us*!

Appropriate snacks such as pizza, spinach, Spider-Ham and hot dogs will of course be taken care of beforehand! It would be great if you could let us know in the super-secret blog by 01.05.2014 whether you are coming and how many sidekicks will be arriving with you in your heromobile! Those of you with superhuman kitchen powers are of course welcome to bring some extra nibbles!

May the Force be with you!

Superheldeneinladung3

* Appropriate costumes are a necessary prerequisite for attending the party. Resistance is futile. For risks and side effects, ask the Ministry of Agriculture and Superheroics.

by Alexander Brock at 08. April 2014 19:20:58

30. March 2014

RaumZeitLabor

A Song of Cold and Hot Dishes — Feast of Thrones


On Monday, 7 April, we will celebrate the first episode of the fourth
season with an opulent six-course menu.
The following dishes from the books await you:

  1. Fingerfish/Spinach-filled pastry
  2. Leek Soup
  3. Beef and Bacon Pie
  4. Lamb's lettuce with cured game ham
  5. Veal with Peppersauce and Royal Peas
  6. Whitepot | Rice Pudding

Authentic liquid refreshments will be served to accompany the meal.

The feast begins at 19:30.
Please sign up in the comments by Friday so that we can plan the number of
portions and the cost contribution.
don / mxf

by don at 30. March 2014 19:00:32

19. March 2014

ociru.net

runwhen: undefined reference to scan_uint

You’re compiling runwhen with a recent version of skalibs? You may encounter the following error:

Making compile/host/rw-match
./compile/host/rw-match.o: In function `main':
rw-match.c:(.text.startup+0x13e): undefined reference to `scan_uint'
collect2: error: ld returned 1 exit status

The following files were not made successfully:
command/rw-match

With skalibs 1.4.2, runwhen won’t compile. The solution is pretty simple, but hard to find if you’re not reading a couple of mailing lists: scan_uint has been renamed to uint_scan in recent skalibs releases. Just replace scan_uint with uint_scan in rw-match.c and everything compiles nicely.
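
If you prefer a one-liner, a simple sed invocation does the renaming (run from the directory containing rw-match.c):

$ sed -i 's/scan_uint/uint_scan/g' rw-match.c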

Thanks a lot to the guys from uberspace.de for pointing me to the solution.

19. March 2014 21:14:00

15. March 2014

sECuREs Webseite

cgit


cgit is a fast and good-looking web interface for git, written in C. Compared to gitweb, trac or other interfaces, it is at least twice as fast and (in my opinion) far more comfortable to use.

by Michael Stapelberg at 15. March 2014 17:14:55

07. March 2014

RaumZeitLabor

Ludum Dare 29 at the RaumZeitLabor

On the weekend of 25–28 April 2014, the international game jam “ludum dare” takes place for the 29th time. At 2 am in the night from Friday to Saturday, the starting gun fires: the goal is to program a game on a given theme, alone within 48 hours or as a team within 72 hours.

Anyone who doesn’t feel like programming in solitude, or has always wanted to watch someone program a game, is invited to drop by the RaumZeitLabor.

by silsha at 07. March 2014 23:38:33

01. March 2014

RaumZeitLabor

The RaumZeitLabor mourns

The RaumZeitLabor mourns.

Thank you for your sympathy. The obituary notice is available for download here: Hacklace Todesanzeige

by fnord at 01. March 2014 22:39:38

xeens Blog

Power it off harder

Magic Hardware Fix Howto:

  • power off device
  • remove battery
  • press power button multiple times. Be creative and create a button press symphony worthy of your hardware.
  • reboot and hope for the best.

Background Story you don’t actually care about

I’ve encountered two situations where this helped and where a reboot or even removing the battery and power cable was not enough. Please share this hint, because it is not something generally known among nerds, other than as a joke reference to The IT Crowd.

One was a laptop which came into contact with the river Rhine and didn’t boot properly anymore (or not at all, I can’t remember exactly). The other was my laptop’s Ethernet card which happily created GBit/s links, but did not allow any packets to go through.

Considering I debugged the latter problem for at least three hours thinking I made a configuration error, I should probably try this sooner in the future.

I got the tip originally from Lenovo support, but I do not know how it actually works. My best guess is that there’s some residual charge in the capacitors which puts the hardware in an invalid state. If you have any insight here, please drop me a mail at stefan-blog at yrden.

01. March 2014 17:03:37

28. February 2014

Moredreads Blog

Posts Are Now Signed

My posts on this site are now signed. You can find a gpg block in the source code of the page. Pass it to gpg and you get the markdown used for generating the site; if you have my GPG key installed, you can also check the signature.

Thanks to Merovius for the neat jekyll plugin achieving this!

28. February 2014 23:15:00

25. February 2014

RaumZeitLabor

Their Highnesses invite you to Aschermettwoch

Einladung_vom_Prinzenpaar

For everyone who cannot read it, here is the transcription of their Highnesses’ invitation:

“Greetings, Mett friends!

We of the Rhenish Onion and Lachsschinken Mett Society (RZL) cordially invite you to celebrate Aschermettwoch together with this year’s carnival prince couple, Mettina I & Mettlev I. On 5 March we will meet at 19:00 in our hallowed halls on Boveristraße.

So that we can prepare properly, please reply to this invitation mail with a cheerful “Ahoi” by 12:00 on 4 March 2014, letting us know whether you will join the party.”

by Unicorn at 25. February 2014 17:58:48

19. February 2014

Mero’s Blog

go stacktraces

Let's say you write a library in Go and want an easy way to get debugging information from your users. Sure, you return errors from everything, but it is sometimes hard to pinpoint where a particular error occurred and what caused it. If your package panics, that will give you a stacktrace, but as you probably know, you shouldn't panic in case of an ordinary error; you should recover gracefully and return the error to your caller.

I recently discovered a pattern which I am quite happy with (for now): you can include a stacktrace when returning an error. If you disable this behaviour by default, it has practically no impact on normal users while making it much easier to debug problems. Neat.

package awesomelib

import (
    "fmt"
    "os"
    "runtime"
)

type tracedError struct {
    err   error
    trace string
}

var (
    stacktrace bool
    traceSize = 16*1024
)

func init() {
    if os.Getenv("AWESOMELIB_ENABLE_STACKTRACE") == "true" {
        stacktrace = true
    }
}

func wrapErr(err error) error {
    // If stacktraces are disabled, we return the error as is
    if !stacktrace {
        return err
    }

    // This is a convenience, so that we can just throw a wrapErr at every
    // point we return an error and don't get layered useless wrappers
    if Err, ok := err.(*tracedError); ok {
        return Err
    }

    buf := make([]byte, traceSize)
    n := runtime.Stack(buf, false)
    return &tracedError{ err: err, trace: string(buf[:n]) }
}

func (err *tracedError) Error() string {
    return fmt.Sprintf("%v\n%s", err.err, err.trace)
}

func DoFancyStuff(path string) error {
    file, err := os.Open(path)
    if err != nil {
        return wrapErr(err)
    }
    defer file.Close()
    // fancy stuff
    return nil
}
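
Users who want the extra debugging information then simply opt in via the environment variable when running their program (the binary name below is just a placeholder):

AWESOMELIB_ENABLE_STACKTRACE=true ./yourprogram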

19. February 2014 02:17:59

28. January 2014

sECuREs Webseite

Wake-On-LAN with Debian on a qnap TS-119P2+


Written by: Michael on 28.01.2014

The original firmware for the qnap TS-119P2+ supports Wake-On-LAN, meaning you can power down your network-attached storage (NAS) when you don’t need it and easily wake it up by sending it a magic ethernet packet. This is an awesome feature when you are not at home all the time (say, you have a day job) and want to conserve some power without giving up on convenience.

Martin Michlmayr published an excellent website about using Debian on the qnap TS-11x/TS-12x devices, which made it really easy to install Debian on my NAS.

Unfortunately, until very recently, with a standard Linux kernel you could not use Wake-On-LAN with the qnap devices. There were multiple reasons for that:

  1. The Linux ethernet driver for the Marvell MV643xx series chips, which those NAS use, simply did not support configuring the chip for Wake-On-LAN. I fixed this in the Linux kernel on 2013-03-11; the fix was released with Linux 3.10.
  2. On the qnap NAS, there is a microcontroller which also needs to be configured with regards to the power-saving mode it should use. The NAS has a feature called EUP, which stands for “Energy-using Products”, an EU directive for power saving. When you enable EUP, your qnap sleeps so deeply that it will not react to the WOL magic packet. This saves another watt or so. To turn this off, qcontrol needed to be patched to provide access to the WOL and EUP bits.
  3. And finally, the Debian kernel just did not enable the CONFIG_MARVELL_PHY configuration option which you need to actually make use of the kernel patch I landed in Linux 3.10. The bug I filed for this was fixed with the linux package in version 3.12.8-1.

Minimum package versions

To use Wake-On-LAN, you’ll need to install linux-image-3.12-1-kirkwood ≥ 3.12.8-1. Furthermore, you’ll need qcontrol ≥ 0.5.2-2. Note that you may also need to build qcontrol from git to be able to disable the real-time clock. Once a new package is available, I’ll update this paragraph.
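
On a system tracking Debian testing/unstable at the time, installing both should boil down to something like the following (package names as given above):

qnap # apt-get install linux-image-3.12-1-kirkwood qcontrol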

Enabling Wake-On-LAN

A one-time step is to disable EUP and the RTC (real-time clock). You need to disable the RTC because otherwise the NAS gets confused about scheduled wake-ups and will immediately wake up again after you power it down:

qnap # qcontrol eup off
qnap # qcontrol rtc off

Before every shutdown, you need to enable Wake-On-LAN:

qnap # ethtool -s eth0 wol g
qnap # qcontrol wakeonlan on
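
As a quick sanity check (not strictly required), you can ask the driver whether the setting took effect; the “Wake-on” line should show “g”:

qnap # ethtool eth0 | grep Wake-on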

I like to turn off WOL after booting because I think (I haven’t done enough testing to confirm it definitively) that the microcontroller gets confused when it receives a WOL packet while the box is running. In that case, it will immediately power back up after you power it down.

Once you enabled WOL, power off the NAS, and try turning it back on from another machine:

qnap # ip link show eth0
qnap # poweroff
x200 $ wakeonlan 00:08:9b:de:22:ff

Note that you must not disconnect the device from power entirely, as the microcontroller will lose its state. That means that after a power outage, you need to power on the NAS manually once.

Making the WOL setup persistent

With the following systemd unit, you’ll get WOL disabled during runtime and enabled before powering off:

[Unit]
Description=Enable Wake on LAN on shutdown
# Just for having the correct order when shutting down.
After=qcontrold.service
# Require eth0 to be present before trying to change WOL.
Requires=sys-devices-platform-mv643xx_eth_port.0-net-eth0.device
After=sys-devices-platform-mv643xx_eth_port.0-net-eth0.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ethtool -s eth0 wol d
ExecStart=/usr/sbin/qcontrol wakeonlan off
ExecStop=/sbin/ethtool -s eth0 wol g
ExecStop=/usr/sbin/qcontrol wakeonlan on

[Install]
WantedBy=multi-user.target

You can find the newest version of this service file on github.
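
To activate it, copy the unit file onto the NAS and enable it; the file name wol.service is just an example, any name will do:

qnap # cp wol.service /etc/systemd/system/
qnap # systemctl enable wol.service
qnap # systemctl start wol.service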

Automatically powering off

I wrote a program called dramaqueen, which will power off the NAS once it realizes that there are no more CIFS (Samba) sessions established. In addition to the CIFS checks, you can also set custom inhibitors, for example for running a backup.

To cross-compile dramaqueen for the qnap, use:

$ go get github.com/stapelberg/zkj-nas-tools/dramaqueen
$ GOARCH=arm GOARM=5 go build github.com/stapelberg/zkj-nas-tools/dramaqueen
$ file dramaqueen 
dramaqueen: ELF 32-bit LSB  executable, ARM, EABI5 version 1 (SYSV), …

In my setup, once I suspend my workstation (and all other machines using the NAS), the NAS will notice that my session has gone and shut itself down after 10 minutes.

by Michael Stapelberg at 28. January 2014 20:30:00

ociru.net

Compiling runit on EL6

If you try to compile runit on EL6, you may encounter the following error:

[...]
./compile runit.c
./load runit unix.a byte.a -static
/usr/bin/ld: cannot find -lc
collect2: ld returned 1 exit status
make: *** [runit] Error 1

That’s because your system is missing the glibc-static package; use yum install -y glibc-static and your build will succeed.

28. January 2014 07:38:00

27. January 2014

sECuREs Webseite

Securing SuperMicro’s IPMI with OpenVPN


Written by: Michael on 27.01.2014

In my last article, I wrote about my experiences with my new SuperMicro server, and a big part of that article was about the Intelligent Platform Management Interface (IPMI) which is included in the SuperMicro X9SCL-F mainboard I bought.

In that previous article, I already suggested that the code quality of the IPMI firmware is questionable at best, and this article is in part proof and in part mitigation :-).

Getting a root shell on the IPMI

When doing modifications on an embedded system, it is a good idea to have an interactive shell available for much easier and faster testing/debugging. Also, getting a root shell can be considered a prerequisite for the modifications we are about to make.

The following steps are based on Tobias Diedrich’s instructions “How to get root on and secure access to your Supermicro IPMI”.

After downloading the version of the IPMI firmware that is running on my machine from the SuperMicro website (filename SMT_X9_315.zip) and unzipping it, we have a bunch of executables for flashing the firmware plus a file called SMT_X9_315.bin which contains the actual firmware.

Running binwalk(1) on SMT_X9_315.bin reveals:

$ binwalk SMT_X9_315.bin

DECIMAL   	HEX       	DESCRIPTION
-------------------------------------------------------------------------------------------------------
1572864   	0x180000  	CramFS filesystem, little endian size 8372224 version #2 sorted_dirs CRC 0xe0f8f23d, edition 0, 5156 blocks, 1087 files  
9961472   	0x980000  	Zip archive data, at least v2.0 to extract, compressed size: 1124880, uncompressed size: 2331112, name: "kernel.bin"  
11086504  	0xA92AA8  	End of Zip archive 
12058624  	0xB80000  	CramFS filesystem, little endian size 1945600 version #2 sorted_dirs CRC 0x75aaf428, edition 0, 926 blocks, 204 files  

So let’s extract the two CramFS file systems and mount them for inspection:

$ dd if=SMT_X9_315.bin bs=1 skip=1572864 count=8372224 of=cramfs1
$ dd if=SMT_X9_315.bin bs=1 skip=12058624 count=1945600 of=cramfs2
$ mkdir mnt1 mnt2
# mount -o loop -t cramfs cramfs1 mnt1
# mount -o loop -t cramfs cramfs2 mnt2

In mnt1 you’ll find the root file system, and it looks like mnt2 contains vendor-specific branding, i.e. their KVM client, images and CGI binaries for the web interface.

The firmware image itself is not the only binary that you’ll come in contact with when dealing with the IPMI. In “Maintenance → IPMI configuration” you can save your current IPMI configuration into a binary file and restore it later. Interestingly, these files start with the text “Salted__”, which is typical for files encrypted with openssl(1).

And indeed, after a bit of digging, we can find the binary that is responsible for encrypting/decrypting the configuration dumps and a bunch of interesting strings in it:

$ strings mnt1/bin/ipmi_conf_backup_tool | grep -A 1 -B 1 -m 1 openssl
CKSAM1SUCKSAM1SUASMUCIKSASMUCIKS
openssl %s -d -in %s -out %s -k %s
aes-256-cbc

And indeed, we can decrypt the file with the following command:

openssl aes-256-cbc -d -in backup.bin -out backup.bin.dec \
    -k CKSAM1SUCKSAM1SUASMUCIKSASMUCIKS

The resulting backup.bin.dec then contains the magic string ATEN\x01\x00 (where \x01 is a byte with value 1) followed by a tar.gz archive:

dd skip=6 bs=1 status=none if=backup.bin.dec of=backup.tar.gz

The tar.gz archive contains a directory called preserve_config which in turn contains a bunch of configuration files. Interestingly, the full lighttpd.conf lives in that tarball, presumably because you can change the port (they actually run sed(1) on the config file).
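
To poke around in those files yourself, simply unpack the archive extracted in the previous step (the file name backup.tar.gz comes from the dd command above):

$ tar xzf backup.tar.gz
$ ls preserve_config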

Now, the idea is to configure lighttpd in such a way that it will execute a file under our control. You can accomplish this by changing lighttpd.conf as follows:

--- lighttpd.conf.O	1970-01-01 01:00:00.000000000 +0100
+++ lighttpd.conf	2014-01-25 19:30:35.476345845 +0100
@@ -14,7 +14,7 @@
 server.modules              = (
 #                               "mod_rewrite",
 #                               "mod_redirect",
-#                               "mod_alias",
+                                "mod_alias",
 #                               "mod_access",
 #                               "mod_trigger_b4_dl",
 #                               "mod_auth",
@@ -174,7 +174,7 @@
 #server.errorfile-prefix    = "/srv/www/errors/status-"
 
 ## virtual directory listings
-#dir-listing.activate       = "enable"
+dir-listing.activate       = "enable"
 ## select encoding for directory listings
 #dir-listing.encoding        = "utf-8"
 
@@ -224,7 +224,8 @@
 
 #### CGI module
 cgi.assign                 = ( ".pl"  => "/web/perl",
-                               ".cgi" => "" )
+                               ".cgi" => "",
+                               ".sh" => "/bin/sh")
 #
 server.use-ipv6 = "enable"
 
@@ -327,3 +328,5 @@
 #include_shell "echo var.a=1"
 ## the above is same as:
 #var.a=1
+
+alias.url += ( "/root" => "/" )

Now all we need is a custom .sh script somewhere on the file system and we are done. The program that restores backup files is mnt1/bin/restore_file.sh, and if you have a look at it you’ll see that it only copies certain files over from the uploaded tarball.

If you have a really close look, though, you’ll realize that it also copies entire directories like preserve_config/ntp without any extra checking. So let’s put our code in there:

cat > ntp/start_telnet.sh <<'EOT'
#!/bin/sh
/usr/sbin/telnetd -l /bin/sh
EOT

In case you wondered, telnetd is already in the IPMI image since they are using busybox and presumably use telnet while developing :-).

The final step is to create a new tar.gz archive with your modified preserve_config and upload that either in the IPMI web interface or flash it using the lUpdate tool that you can find in the IPMI firmware zip file. While the web interface will accept unencrypted tar.gz files for backwards compatibility, I’m not sure whether lUpdate will accept them, therefore I’ll explain how to properly encrypt it:

$ cat > encrypt.sh <<'EOT'
#!/bin/bash
# The file is encrypted with a static key and consists of ATEN\x01\x00 followed
# by a tar.gz archive.

KEY=CKSAM1SUCKSAM1SUASMUCIKSASMUCIKS
(echo -en "ATEN\x01\x00"; cat $1) | openssl aes-256-cbc -in /dev/stdin -out ${1}.bin -k $KEY
EOT
$ tar czf backup_patched.tar.gz preserve_config
$ ./encrypt.sh backup_patched.tar.gz
$ scp backup_patched.tar.gz.bin box:
box # ./lUpdate -i kcs -c -f ~/backup_patched.tar.gz.bin -r y

After the IPMI rebooted (give it a minute), you should be able to navigate to http://ipmi/root/nv/ntp/start_telnet.sh and get an HTTP/500 error. Afterwards, connect via telnet to the IPMI and you should get a root shell.

Getting OpenVPN to work

Now that we have a root shell, we can try to get OpenVPN to work temporarily and then make it persistent later. The first step is to cross-compile OpenVPN for the armv5tejl architecture which the IPMI uses.

First, download the toolchain (SDK_SMT_X9_317.tar.gz (727 MB)) from SuperMicro’s FTP server and extract it. Run ./BUILD.sh and watch it fail if you have an x86-64 machine. Then, apply the following patch and run ./BUILD.sh again:

--- OpenSSL/openssl/config.O    2014-01-11 13:09:40.012461895 +0100
+++ OpenSSL/openssl/config      2014-01-11 13:10:17.749870032 +0100
@@ -53,6 +53,11 @@
 SYSTEM=`(uname -s) 2>/dev/null`  || SYSTEM="unknown"
 VERSION=`(uname -v) 2>/dev/null` || VERSION="unknown"

+MACHINE="armv5tejl"
+RELEASE="2.6.17.WB_WPCM450.1.3"
+SYSTEM="Linux"
+VERSION="#3 Thu Oct 31 16:15:24 PST 2013"
+

 # Now test for ISC and SCO, since it is has a braindamaged uname.
 #

Once at least OpenSSL has been built successfully, set up a few variables (based on ProjectConfig-HERMON), then download and build OpenVPN:

export CROSS_COMPILE=$PWD/ToolChain/Host/HERMON/gcc-3.4.4-glibc-2.3.5-armv4/arm-linux/bin/arm-linux-
export ARCH=arm
export CROSS_COMPILE_BIN_DIR=$PWD/ToolChain/Host/HERMON/gcc-3.4.4-glibc-2.3.5-armv4/arm-linux/bin
export TC_LOCAL=$PWD/ToolChain/Host/HERMON/gcc-3.4.4-glibc-2.3.5-armv4/arm-linux/arm-linux
export PATH=$CROSS_COMPILE_BIN_DIR:$PATH

mkdir OpenVPN
cd OpenVPN
wget http://swupdate.openvpn.net/community/releases/openvpn-2.3.2.tar.gz
tar xf openvpn-2.3.2.tar.gz
cd openvpn-2.3.2

CFLAGS="-I$PWD/../../OpenSSL/openssl/local/include" \
CPPFLAGS="-I$PWD/../../OpenSSL/openssl/local/include" \
LDFLAGS="-L$PWD/../../OpenSSL/openssl/local/lib -lcrypto -lssl" \
CC=arm-linux-gcc \
  ./configure --enable-small --disable-selinux --disable-systemd \
    --disable-plugins --disable-debug --disable-eurephia \
    --disable-pkcs11 --enable-password-save --disable-lzo \
    --with-crypto-library=openssl --build=arm-linux-gnueabi \
    --host=x86_64-unknown-linux-gnu --prefix=/usr

Now, if you copy that openvpn binary to the IPMI and run it, you’ll notice that the kernel is missing the tun module, so that OpenVPN cannot actually create its tun0 interface. Therefore, let’s enable that module in the kernel configuration and rebuild:

sed -i 's/# CONFIG_TUN is not set/CONFIG_TUN=m/g' \
  Kernel/Host/HERMON/Board/SuperMicro_X7SB3/config \
  Kernel/Host/HERMON/linux/.config \
  Kernel/Host/HERMON/config
./BUILD.sh
ls -l Kernel/Host/HERMON/linux/drivers/net/tun.ko

Now, after copying tun.ko to the IPMI, you can get OpenVPN to work with the following steps:

# insmod /tmp/tun.ko
# mknod /tmp/tun c 10 200
# /tmp/openvpn --config /tmp/openvpn.conf --verb 9

“Properly integrating” OpenVPN

Since I only have one SuperMicro X9SCL-F board and no development environment, I did not want to try to build a complete IPMI firmware and flash it. Instead, I decided to integrate OpenVPN by putting it into the NVRAM, where all the other configs live. That flash partition is 1.3M big, so we don’t have a lot of space, but it’s doable.

First of all, we need a script that will ungzip the OpenVPN binary, load the tun module, create the device node and then start OpenVPN in daemon mode. Furthermore, the script should enable telnet within the VPN for easy debugging, and it should set up iptables rules to block anything but the VPN. I call this script start_openvpn.sh:

#!/bin/sh
# This script will be run multiple times, so exit if the work is already done.
[ -e /tmp/openvpn ] && exit 0
# Do not generate any output, otherwise the lighttpd config may break.
exec >/tmp/ov.log 2>&1

/bin/gunzip -c /nv/ntp/openvpn.gz > /tmp/openvpn
/bin/chmod +x /tmp/openvpn
/sbin/insmod /nv/ntp/tun.ko
/bin/mknod /tmp/tun c 10 200
/tmp/openvpn --config /nv/ntp/openvpn.conf --daemon
/usr/bin/setsid /nv/ntp/telnet_watchdog.sh &

/sbin/iptables -A INPUT -p tcp --dport 1194 -j ACCEPT &&
/sbin/iptables -A INPUT -s 10.137.0.0/24 -j ACCEPT &&
/sbin/iptables -A INPUT -p udp -s 8.8.8.8 -j ACCEPT &&
/sbin/iptables -A INPUT -p udp --sport 123 -j ACCEPT &&
/sbin/iptables -A INPUT -p icmp -s 10.1.2.0/23 -j ACCEPT &&
/sbin/iptables -A INPUT -j DROP

The referenced telnet_watchdog.sh looks as follows:

#!/bin/sh
# /SMASH/chport executes “killall telnetd” and is run after /etc/init.d/httpd
# start, so it will kill our telnetd. This watchdog will restart telnetd
# whenever it gets killed.
while :
do
    /usr/sbin/telnetd -l /bin/sh -b 10.137.0.1 -F
done

The openvpn.conf looks like this:

dev tun
dev-node /tmp/tun
ifconfig 10.137.0.1 10.137.0.2
secret /nv/ntp/openvpn.secret
port 1194
proto tcp-server
user nobody
persist-key
persist-tun
script-security 2
keepalive 10 60
# TODO: This is a lower bound. Depending on your network setup,
# a higher MTU is possible.
link-mtu 1280

I’m using proto tcp-server because I only have SSH port-forwardings available into the management VLAN, otherwise I would just use the default proto udp.
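
For reference, here is a rough sketch of a matching client-side configuration; the remote hostname is a placeholder for however you reach the IPMI (e.g. the local end of an SSH port-forward), and the secret file must be the same one used on the IPMI side:

cat > openvpn-client.conf <<'EOT'
dev tun
proto tcp-client
remote ipmi.example.com 1194
ifconfig 10.137.0.2 10.137.0.1
secret openvpn.secret
keepalive 10 60
link-mtu 1280
EOT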

Instead of the lighttpd.conf modifications I described above, this time we can use a simpler way of invoking this script:

echo 'include_shell "/nv/ntp/start_openvpn.sh"' >> lighttpd.conf

Tarball

You can grab a tarball (247 KiB) with all the files you need to extract to the ntp/ subdirectory.

Conclusion

In case a SuperMicro or ATEN engineer is reading this, please add built-in OpenVPN support as a feature ;-).

Apart from that, happy hacking, and enjoy the warm fuzzy feeling of your IPMI interface finally being somewhat secure! :-)

by Michael Stapelberg at 27. January 2014 17:30:00

23. January 2014

Mero’s Blog

Signed blog posts

tl;dr: I sign my blogposts. curl http://blog.merovius.de/2014/01/23/signed-blog-posts.html | gpg

I might have to update my TLS server certificate soon, because the last change seems to have broken the verification of https://merovius.de/. This is nothing too exciting, but it occurred to me that I should actually provide some warning or notice in that case, so that people can be sure that there is nothing wrong. The easiest way to accomplish this is a blogpost, and the easiest way to verify that the statements in that blogpost are correct is to provide a signed version. So because of this (and, well, because I can) I decided to sign all my blogposts with my gpg key. People who know me should have my gpg key, so they can verify that I really wrote everything I claim.

I could have used jekyll-gpg_clearsign, but it does not really do the right thing in my opinion. It wraps all the HTML in a GPG SIGNED MESSAGE block and attaches a signature. This has the advantage of minimal overhead: you only add the signature itself plus a constant amount of comments. However, it makes really verifying the contents of a blogpost pretty tedious: you would have to either manually parse the HTML in your mind, or save it to disk and view it in your browser, because you cannot be sure that the HTML you get when verifying it via curl on the command line is the same you get in your browser. You could write a browser extension or something similar that looks for these blocks, but still, the content could be tampered with (for example: add the correctly signed page as a comment in a tampered-with page, or try to somehow include some JavaScript that changes the text after verification…). Also, the generated HTML is not really what I want to sign; after all, I cannot really attest that the HTML generation is solid and trustworthy; I never read the jekyll source code, and I don't want to re-read it at every update. What I really want to sign is the stuff I wrote myself, the markdown (or whatever) I put into the post. This has the additional advantage that most markdown is easily parseable by humans, so you can actually have gpg output the signed text and immediately read everything I wrote.

So this is what happens now: every blogpost has an HTML comment embedded in it, containing the original markdown I wrote for the post in compressed, signed and ASCII-armored form. You can try it via

curl http://blog.merovius.de/2014/01/23/signed-blog-posts.html | gpg

This should output some markdown to stdout and a synopsis of gpg about a valid (possibly untrusted, if you don't have my gpg-key) signature on stderr. Neat!

The changes needed in the blog code itself were pretty minimal. However, since I don't want my gpg secret key to be on the server, I had to change the deployment a little bit. Where before a git push would trigger a hook on the remote repository on my server that ran jekyll, I now have a local script that wraps a jekyll build, an rsync to the webserver directory and a git push. gpg-agent ensures that I am not asked for a passphrase too often.

So, yeah. Crypto is cool. And the procrastinator prevailed again!

23. January 2014 04:04:25

12. January 2014

Atsutane's blog

Distro A is better than B

Every few weeks they show up in my feed reader: subjective assessments of why Linux distribution A is better than distribution B. As long as these comparisons stay factual and point out strengths and weaknesses of the different systems, such texts are quite readable and interesting. You are pointed to things you may never have paid attention to yourself. Unfortunately, only very few posts are like that.

Most are simply emotional texts, unfortunately often ridiculous ones. Things are compared with one another without taking the context into account. Yes, for most people a distribution that provides binary packages is more pleasant to use in everyday life than maintaining a Linux From Scratch system. But those people are not necessarily part of the LFS target audience either.

On the other hand, it is precisely these texts that entertain and can lift the mood. When you have read the day’s news in the evening and have to wonder how far removed from reality some politicians live, it is posts about relatively trivial things like the installer that sweeten my evening. Because hey, kernel and userspace may be identical, but the installer, which you have to use so often, just doesn’t please me!

In that spirit: thank you very much, and where are the posts about SteamOS being so much better than distribution C? :–)

12. January 2014 15:07:54

11. January 2014

sECuREs Webseite

Building a SuperMicro 1U server (with IPMI)


Written by: Michael on 11.01.2014

Recently, a couple of friends and I rented a rack in a datacenter. Not just any datacenter, but that’s a story for another time ;-). Each participant can hang up a 1U server in that rack, so I needed to build one.

In this article, I’ll shed some light on the rationale behind which components I ordered and what my experiences with this setup are. Most of the article will cover the IPMI, a remote management interface. Having IPMI permanently available (as opposed to on-demand at my current dedicated server hoster) and influencing which components go into that box are the two killer features for me.

Choice of vendor

There are a number of big hardware companies out there, and in fact many people in this little project just bought a Dell r210 (Dell’s current entry-level server) or an HP machine. While that is certainly an easy option, I wanted fancier remote management capabilities. With Dell, you only get basic IPMI, but no remote console or virtual media functionality, unless you pay for the extra iDRAC board.

Recent SuperMicro mainboards on the other hand come with IPMI 2.0, including remote console and virtual media. Also, their boards generally made a really good impression on me and many people I know. Given that I like assembling my own computers and choosing every single part that goes into the machine, I decided to go with a custom SuperMicro box instead of an “off-the-shelf” server of one of the big players.

Hardware specifics

The cheapest widely available SuperMicro board that I could find is the X9SCL-F. It accepts only Intel Xeon processors, so I ordered the cheapest Xeon I could find: E3-1220 v2. The rationale here is that modern CPUs provide a lot of computing power, but the server workloads I run are far from CPU-constrained. There are a couple of interesting features that the CPU has, though. For one, it supports AES-NI, an instruction set introduced by Intel to speed up AES encryption/decryption. This makes it feasible to encrypt all data on the disks without paying a big latency/throughput penalty. Furthermore — and that is probably true for all server CPUs manufactured in the last couple of years — it supports Intel VT-x for hardware virtualization, so that I can run KVM if I choose to.

The combination of the SuperMicro X9SCL-F mainboard and the memory controller on that Xeon processor requires unbuffered ECC RAM. The term “unbuffered” means it must not be registered memory. This is a somewhat peculiar combination, but you can find RAM modules that fit the bill. I chose to go for 2 modules with 8 GB each (Kingston KVR16E11), which is the maximum supported amount of memory per module. If it turns out that I want more memory in the future, I can just add two more 8 GB modules.

As for the disks, I have had a lot of disks fail in the last couple of years, so nowadays I just buy the enterprise grade disks, typically from Western Digital. For SATA, this means using the WD RE4 WD1003FBYX-0. I bought two of them and run them in a RAID-1 so that one can fail and the box still continues working. Of course, in case of failure, I’ll need to order a new disk, drive to the datacenter and replace the disk. In case you wonder: the particular case I bought does not have hot-swap drive bays, which was not entirely clear to me. With a better case, one could maybe store a hot spare (i.e. a drive in a hot plug tray) at the datacenter and have the remote hands just replace the drive for you.

The case I chose is the SuperMicro SC512L-260, and I regret that. It’s the smallest and cheapest 1U case that SuperMicro sells, and it shows. The power supply unit only has 4 pin molex connectors, no SATA power connectors, so you need adapters. The wiring in the case is far from satisfactory, and the mainboard power cables are just barely long enough. Instead of having drive enclosures and a proper way to put them into the case, you directly mount the drive on the bottom of the case with screws. Of course, actually putting the drive in requires you to take out the fan (which is in the middle between the two drives), otherwise you don’t have enough space to do anything. The case is not very deep, but that’s not an advantage in any way, IMO.

SuperMicro box 1 SuperMicro box 2 SuperMicro box 3

Setting up the machine

Since the machine is located an hour’s drive away, I wanted the remote management functionality to work well. In order to test that, I decided to install the machine “remotely”, and not just boot from USB.

When SuperMicro writes that the X9SCL-F has two ethernet ports, what they really mean is that it has two ports (LAN1 and LAN2) and a dedicated IPMI ethernet port. This was a bit of a surprise; I thought it had only one LAN port and the IPMI port. It’s not a big issue either, but it means that your operating system will find two ethernet adapters when installing, and you need to choose the right one. Also, depending on your Linux distribution (i.e. whether it has predictable interface names or not), you may get LAN1 detected as either eth0 or eth1. This is not a big deal since typically the order is persisted in e.g. /etc/udev/rules.d/70-persistent-net.rules on Debian. However, it is one thing to be aware of when installing a new operating system or partially restoring from a backup.

The first thing that really annoyed me was that the BIOS by default comes with IPMI disabled. There are three options for how IPMI can get its network configuration: statically configured, using DHCP, or “do nothing”. Not very helpfully, the default is “do nothing”, and serial console redirection is also disabled in the BIOS. This means you need to hook up the machine to a monitor and a keyboard at least once. Luckily, USB keyboards work just fine. Nevertheless, it is unclear to me why you would choose that setting as a default. For people deploying these (server) mainboards in datacenters, it adds an additional step, whereas people who don’t want the IPMI need to have a way to access the BIOS anyway to change settings (or they could disable IPMI using IPMI…).

When enabling IPMI, do pay attention to the LAN interface setting, which controls on which ethernet interface the IPMI is active. The value “failover”, which is the default, means that at boot time, the IPMI will check if there is a link on the dedicated IPMI ethernet interface and fail over to LAN1 otherwise. “Boot time” is somewhat unclear in this context, given that the IPMI BMC boots as soon as there is physical power connected to the machine, no matter whether you actually power up the board. In order to get a deterministic and reliable mode of operation, you should choose either “shared” or “dedicated” instead of “failover”. Dedicated is pretty clear — IPMI will only be active on the dedicated IPMI interface. This is okay if you have enough ethernet cables and ports and don’t mind the extra wiring. In our case, we wanted to avoid that, so I went for “shared”, meaning there will be two MAC addresses on LAN1 (which is the left-most port on the mainboard). You can also configure the shared IPMI to use a VLAN ID. Note that you should update your IPMI firmware to the latest available firmware version before trying to do that (03.15 at the time of writing). Otherwise, VLANs might just not work at all :-).

Note that I later discovered that using one ethernet cable and the “shared” setting is not a good idea. When rebooting, the ethernet port will be disabled, and it takes quite some time until the IPMI makes it come up again. Typically, it takes longer than the time frame during which you can enter the BIOS. This means you need a cooperating host (or ipmitool(1) on another host) in order to tell it to go to the BIOS (see below). If you can, definitely use the dedicated ethernet interface.

The IPMI interface is reachable via HTTP (or HTTPS, but more on that later) on the IP address that you configured or that it got via DHCP. The web interface makes heavy use of JavaScript, but works fine on Chrome and Firefox on Linux. To log in for the first time, use the username ADMIN and password ADMIN.

Using the remote console

While the IPMI’s remote console (called iKVM) is based on VNC and even runs on port 5900, it does use a non-standard authentication protocol. To get access, you typically log in to the web interface, navigate to “Remote Control” → “Console Redirection” and press the “Launch Console” button. The interface will serve a jnlp file, which you can launch using javaws(1).

There are third-party implementations of this protocol for Mac OS X at github.com/thefloweringash/chicken-aten-ikvm, but I haven’t tried them yet.

Note that if you access the IPMI web interface through SSH tunnels with different ports, you’ll need to replace the ports in the jnlp file.

IPMI Remote Media

The remote media functionality on the X9SCL-F with IPMI firmware version 03.15 can only read an image from Windows shares (CIFS). It wants you to specify a hostname or IP address plus the path to the image. The path contains the share name, so if your share is called “isos” and the ISO image is called “debian-netinst.iso”, the path is “\isos\debian-netinst.iso”.

To serve files via CIFS without authentication easily, install the “samba” package on Debian and modify your /etc/samba/smb.conf to contain the following lines:

[global]
  workgroup = WORKGROUP
  server string = x200
  dns proxy = no
  interfaces = eth0
  syslog = 0
  browsable = yes
  map to guest = bad user

  encrypt passwords = true
  passdb backend = tdbsam
  obey pam restrictions = yes
  unix password sync = no

[debian]
  path = /home/michael/debian-images/
  read only = yes
  guest ok = yes
  browseable = yes
  guest account = nobody

I specified “guest” as username and “guest” as password in the webinterface, just to be sure, and that worked fine.

Note that you specifically need to choose “virtual media” as a boot device in the BIOS, though. In my tests, even after changing the boot order to contain virtual media as the first option, this setting would not always persist across multiple reboots (perhaps it gets discarded as soon as you boot without a virtual media image mounted).

Using ipmitool(1) on the host

After installing an operating system, you need to make sure that a bunch of kernel modules are loaded before you get the /dev/ipmi0 device node that ipmitool(1) requires:

modprobe ipmi_si
modprobe ipmi_devintf
modprobe ipmi_msghandler

In order to boot into the BIOS (useful if the IPMI is unreachable during boot because it’s running on the shared ethernet port), use:

ipmitool chassis bootdev bios
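
Other chassis subcommands work the same way; for example, to query or cycle the power state (handy when the box hangs), you can use:

ipmitool chassis power status
ipmitool chassis power cycle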

Enabling the serial console

Because you can never have enough safety nets, it makes sense to use the serial port in addition to the IPMI.

With systemd, getting a getty started on a serial console is as simple as booting with console=ttyS0 in the kernel command line, which has the nice side effect of also letting the kernel log to serial console. In addition, you’ll want to have grub itself be available on the serial console. On Debian, this works in /etc/default/grub:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0 init=/bin/systemd"
GRUB_TERMINAL=console
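
After changing these settings on Debian, regenerate the grub configuration so they take effect:

update-grub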

Installing a custom SSL certificate

In our hosting project, every participant has access to the management VLAN, so everyone can connect to the web interface. Even worse, since the access is all tunneled through the same box, theoretically the box admin (or any attacker with a local root exploit) could sniff the traffic, thereby gathering the IPMI password (no, they don’t use a challenge-response mechanism) and owning your box. While the chances of that happening are relatively slim and I trust the other participants, I like end-to-end cryptography and decided I want to properly secure that interface as far as reasonably possible.

It took me quite a while to get this working, so I’ll be very specific on what the IPMI does. It is running a standard lighttpd to handle HTTP/HTTPS connections and execute CGI binaries. The SSL configuration is:

$SERVER["socket"] == ":443" {
  server.use-ipv6 = "enable"
  ssl.engine = "enable"
  ssl.cipher-list = "TLSv1+HIGH !SSLv2 RC4+MEDIUM !aNULL !eNULL !3DES @STRENGTH"
  ssl.pemfile = "server.pem"
}

In server.pem, the web interface stores the certificate followed by the private key.

What is important to know is that the httpd init script also runs “openssl verify”, probably to make sure that the user-provided certificates actually work and that lighttpd does not end up in a crash loop. What’s unfortunate about this is that “openssl verify” verifies only the first certificate it finds in the PEM file. In case you want to use an SSL certificate that is actually verified by a trusted CA, this implies that the first certificate in the .pem file needs to be the CA certificate itself (because the IPMI does not have a certificate store). In my case, I tried the order “CA, intermediate CA, certificate, private key”. However, lighttpd would not load the certificates and just exited with an error:

2014-01-09 23:51:06: (network.c.601) SSL: Private key does not match the
certificate public key, reason: error:0B080074:x509 certificate
routines:X509_check_private_key:key values mismatch server.pem 

There are two possible workarounds for this problem:

SSL method 1: Get “OK” into the certificate

This is the most fun method. You just need to manage to get the string “OK” into any of your certificate’s fields — the common name will do. Note that this is case sensitive, so if your CA converts hostnames to lower case before issuing the certificate, this won’t work. In case they don’t, a certificate issued for e.g. foobar-OK.stapelberg.de will happily pass the init script’s check. This is because the “openssl verify” output includes the certificate’s fields in the output, and the init script merely greps for “OK”:

# …
openssl verify $CERT_FILE > /tmp/cert.st 2>&1;
if [ -z "`cat /tmp/cert.st | grep OK`" ];then 
# …
else
    echo "SSL certificate verified OK."                    
fi

SSL method 2: Use a self-signed certificate

It’s not quite as clean, but you can just use a self-signed certificate, created as follows:

openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

The common name is where you enter the DNS name that points to your IPMI. Afterwards, you can just upload cert.pem as the certificate and key.pem as the private key in the web interface.

Conclusion

The server seems to work pretty well, but the IPMI clearly still needs some work. It’s essentially an embedded computer, to a large part closed source, with questionable security/code quality and only somewhat reliable remote management capabilities. In addition to the inherent port flapping with the “shared” ethernet configuration I described above, the iKVM server also crashed on me once.

Let’s see if SuperMicro (or ATEN, the company that actually makes the IPMI) is willing to improve things :).

by Michael Stapelberg at 11. January 2014 17:00:00

31. December 2013

ociru.net

Highlight what could be highlighted

As the last post for 2013, I want to highlight (SCNR) two advantages of having a color screen while working on the Un*x CLI.

ccze

CCZE is a robust and modular log colorizer with plugins for apm, exim, fetchmail, httpd, postfix, procmail, squid, syslog, ulogd, vsftpd, xferlog, and more.

I learned about ccze a few years ago, and since then it promoted itself to daily usage. Get it with apt-get or yum (it’s in EPEL) and enjoy colorful output like

$ tail -f /var/log/syslog | ccze

man pages with colors

Highlighting in man pages is possible as well; it’s just not documented well.

# colorful manpages
export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;31m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m'
export LESS_TERMCAP_so=$'\E[01;44;33m'
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;32m'

Try it out! And have a colorful 2014.

31. December 2013 08:27:00

GRML as rescue boot

I already wrote about the benefits of having a rescue system like GRML on hand, so I won’t try to convince you again. But what’s the use of a PXE enabled rescue environment while sitting on a train and screwing up your LVM? There’s no need to search for your rescue thumb drive, because – let’s face it – it’s always in the last pocket you look at or not even with you at all.

With enough (unencrypted!) space in /boot and grub2 as boot loader, it’s possible to boot the GRML ISO via the loopback functionality of grub2. The syntax is pretty much straightforward:

menuentry "GRML Rescue System (grml64-full_2013.09.iso)" {
        insmod part_gpt
        insmod ext2
        set root='(hd0,gpt4)'
        search --no-floppy --fs-uuid --set=root [...]
        iso_path="/grml/grml64-full_2013.09.iso"
        export iso_path
        kernelopts="   "
        export kernelopts
        loopback loop "/grml/grml64-full_2013.09.iso"
        set root=(loop)
        configfile /boot/grub/loopback.cfg
}

Keep in mind that all files except configfile are relative to /boot, so /grml/*.iso is stored as /boot/grml/*.iso in your fully mounted directory hierarchy.

If you’re on Debian or *buntu, you don’t have to edit your grub2 config yourself; there’s a package for that. Just place the downloaded GRML ISO in /boot/grub and install grml-rescueboot.

$ wget http://download.grml.org/grml64-full_2013.09.iso
$ wget http://download.grml.org/grml64-full_2013.09.iso.sha1
$ sha1sum -c grml64-full_2013.09.iso.sha1
grml64-full_2013.09.iso: OK
$ apt-get install grml-rescueboot

If you want to tweak the boot options, have a look at the GRML Wiki. But now that you’re ready with your mobile rescue system, Murphy won’t find you (again). ;–)

31. December 2013 06:47:00

17. December 2013

RaumZeitLabor

30C3 Congress Everywhere

30C3 Logo

From 27.12. to 30.12., the 30th Chaos Communication Congress (30C3) takes place in Hamburg. In recent years, accompanying decentralized events have become established at which haecksen and hackers, nerds and interested non-nerds meet in a relaxed atmosphere to follow the talks via streaming, discuss, consume Mate and generally have a good time. This year, the RaumZeitLabor will again take part and host one of these events, which this year run under the motto “Congress Everywhere”.

We will try to keep the RaumZeitLabor open for the entire duration of the event. Just take a quick look at the room status and come by. The friendly hackerspace next door is looking forward to your visit.

by thinkJD at 17. December 2013 14:52:57

16. December 2013

xeens Blog

Google Webfonts gives you Comic Sans if your config sucks

A friend just discovered this: if you include a Google Web Font via http on an http site, you get the expected fonts. But should you include them via http on an https site, you get Comic Sans.

Want to see a live demo? Visit http://www.yrden.de – it works as expected. However, visit https://www.yrden.de and see Comic Sans in all its glory. This is definitely a nice hint at configuration errors, although I had a good laugh and will keep it “broken” like this.

Even though I dislike Comic Sans very much, …

I’m not even mad, that’s amazing.

16. December 2013 13:10:00

Mero’s Blog

Incentives in education

tl;dr: I hate software engineering as it is taught in Heidelberg. Really.

I have often recited the story of how I came to choose computer science over physics as a minor in my mathematics bachelor:

After sitting through almost one semester of the introductory course to theoretical physics in my 3rd semester — which is incredibly unsatisfactory and boring once you are past your first year of mathematics — I suddenly realized that my reward for suffering through yet another problem sheet of calculating yet another set of differential operators was that I would have to suffer through four or five more courses of this type. This really seemed like a poor incentive when I was just discovering hacking and finding out that I was really good at computer science. So I decided to pass on the opportunity, did not work all night on that last sheet (and later found out that I would have gotten credit for that course without even taking the written exam if I had just handed in this additional problem sheet) and instead decided to minor in computer science.

Three years after that I decided to get a second bachelor degree in computer science (I finished my bachelor of mathematics earlier that year and was pursuing my master degree at that point), because it seemed a really easy thing to do at that point: I only needed two semesters more of studies and a bachelor thesis. That is not a lot of work for a degree. We are now one year and some change after that point, and there really is not a lot I need anymore. Basically I only need to finish the introduction to software engineering and then write my thesis. Yay for me.

The reason I write this (and the reason I started with the anecdote about physics) is that once again I am questioning the incentives versus the cost. Since I am pretty sure that it would actually be fun to write my thesis, it all boils down to the question of whether I want to finish this course (which, again, I'm more than halfway done with; it is not a lot of work to go) to get a bachelor degree in computer science. And don't get me wrong — I'm sure that software engineering is a topic that can be interesting and captivating, or at a minimum bearable. But the way it is done here in Heidelberg is just hell. It is presented in an incredibly boring way and consists of a flood of uninteresting, repetitive tasks, and the project work (meant to show how important teamwork, quality assurance and drawing a lot of diagrams are) is a catastrophically bad, unusable and ugly piece of crapware that can't even decently perform the very simple task it was designed for (managing a private movie collection. I mean, come on, it is not exactly rocket science to do this in at least a barely usable way).

And even though it is a hell that I would only have to endure for about two or three problem sheets and one written exam, I watch myself putting off the work on it (for example by writing this stupid blogpost) and I seriously question whether this second bachelor is really incentive enough to suffer through it.

If it were my first degree, the answer would of course be a clear “yes”. But a second one? Not sure. Ironically, the main way I'm putting off work on this problem sheet — I got up today at 10am to immediately and energetically start working on it — is watching a lot of TED talks on youtube. That's right, I practically spent 14 hours more or less non-stop watching TED talks. This one applies to some extent — extrinsic incentives can only go so far in making us do some work; at some point, without at least some intrinsic motivation, I at least will not perform very well (or at all):

16. December 2013 01:32:47

09. December 2013

ociru.net

Happy Birthday Doom

20 years ago, on December 10th 1993, id Software reached a milestone in computer gaming history: Doom was released to the public. Presumably the most ported piece of software ever turns twenty!

Doom may not be old enough to buy beer from the liquor store but let’s celebrate anyway – with style of course – the Phobos way:

$ apt-get install doomsday doom-wad-shareware

By the way, also in 1993, Microsoft released Windows NT 3.1 and Windows for Workgroups 3.11.

DO YOU WANT TO QUIT TO DOS?
(Y/N) _

09. December 2013 23:24:00

07. December 2013

sECuREs Webseite

CoreOS and Docker: first steps


Written by: Michael on 07.12.2013

CoreOS is a minimal operating system based on Linux, systemd and Docker. In this post I describe how I see CoreOS/Docker and how my first steps with it went.

What is Docker and why is it a good idea?

Finding the right words to describe all of this to someone who has not worked with any of it is hard, but let me try. With docker, you can package software into “containers”, which can then be run either on your own server(s) or at some cloud infrastructure provider (dotcloud, stackdock, digitalocean). An example for software could be cgit, a fast git web interface. Docker spawns each container in a separate Linux container (LXC), mostly for a clean and well-defined environment, not primarily for security. As far as the software is concerned, it is PID 1, with a dynamic hostname, dynamic IP address and a clean filesystem.

Why is this a good idea? It automates deployment and abstracts away from machines. When I have multiple servers, I can run the same, unmodified Docker container on any of them, and the software (I’ll write cgit from here on) doesn’t care at all because the environment that Docker provides is exactly the same. This makes migration painless — be it in a planned upgrade to newer hardware, when switching to a different hoster or because there is an outage at one provider.

Now you might say that we have had such a thing for years with Amazon’s EC2 and similar cloud offerings. Superficially, it seems very similar, the only difference being that you send EC2 a virtual machine instead of a Docker container. These are the two biggest differences for me:

  1. Docker is more modular: whereas having a separate VM for each application you deploy is economically unattractive and cumbersome, with Docker’s light-weight containers it becomes possible.
  2. There is a different mental model in working with Docker. Instead of having a virtualized long-running server that you need to either manually or automatically (with Puppet etc.) keep running, you stop caring about servers and start thinking in terms of software/applications.

Why CoreOS?

CoreOS has several attractive features. It is running read-only, except for the “state partition”, in which you mostly store systemd unit files that launch and supervise your docker containers, plus docker’s local cache of container images and their persistent state. I like read-only environments because they tend to be fairly stable and really hard to break. Furthermore, CoreOS is auto-updating and reuses ChromeOS’s proven updater. There are also some interesting clustering features, like etcd, a highly-available key-value store for service configuration and discovery, which is based on the raft consensus algorithm. This should make master election easy, but at the time of writing it’s not quite there yet.
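
As a rough sketch of what talking to etcd looks like (assuming the default client port 4001 and the v1 HTTP API that early etcd versions shipped; the paths changed in later releases), setting and reading a key is just two curl calls:

d0 $ curl -L http://127.0.0.1:4001/v1/keys/message -d value="Hello world"
d0 $ curl -L http://127.0.0.1:4001/v1/keys/message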

Of course, if you prefer, you could also just install your favorite Linux distribution and install Docker there, which should be reasonably straightforward (but took me a couple of hours due to Debian's old mount(8) version, which has a bug). Personally, I found it an interesting exercise to run in an environment that has very similar constraints to what paid Docker hosting provides. That is, you get to run Docker containers and nothing else.

Conventions throughout this article

For hostnames, home represents your computer at home or work where you build Docker containers before deploying them on your server(s). d0 (d0.zekjur.net) represents a machine running CoreOS or any other operating system with Docker up and running. A dollar sign ($) represents a command executed as the default unprivileged user core; the hash sign (#) represents a command executed as root (use sudo -s to get a root shell on CoreOS).

Note that I assume you have Docker working on home, too.

Step 1: Dockerizing cgit

To create a docker container, one can either manually run commands interactively or use a Dockerfile. Since the former approach is very unreliable and error-prone, I recommend using Dockerfiles for anything more than quick experiments. A Dockerfile starts from a base image, which can be a tiny environment (busybox) or, more typically, a Linux distribution's minimal installation like tianon/debian in our case.

After specifying the base image, you can run arbitrary commands with the RUN directive and add files with the ADD directive. The interface between a container and the rest of the world is one or more TCP ports. In the modern world, this is typically port 80 for HTTP. The final thing to specify is the ENTRYPOINT, which is what Docker will execute when you run the container. The Dockerfile we use for cgit looks like this:

FROM tianon/debian:sid

RUN apt-get update
RUN apt-get dist-upgrade -y
RUN apt-get install -y lighttpd

ADD cgit /usr/bin/cgit
ADD cgit.css /var/www/cgit/cgit.css
ADD cgit.png /var/www/cgit/cgit.png

ADD lighttpd-cgit.conf /etc/lighttpd/lighttpd-cgit.conf

ADD cgitrc /etc/cgitrc

EXPOSE 80
ENTRYPOINT ["/usr/sbin/lighttpd", "-f", "/etc/lighttpd/lighttpd-cgit.conf", "-D"]

On my machine, this file lives in ~/Dockerfiles/cgit/Dockerfile. Right next to it in the cgit directory, I have the cgit-0.9.2 source and copied the files cgit, cgit.css and cgit.png out of the build tree. lighttpd-cgit.conf is fairly simple:

server.modules = (
	"mod_access",
	"mod_alias",
	"mod_redirect",
	"mod_cgi",
)

mimetype.assign = (
	".css" => "text/css",
	".png" => "image/png",
)

server.document-root = "/var/www/cgit/"

# Note that serving cgit under the /git location is not a requirement in
# general, but obligatory in my setup due to historical reasons.
url.redirect = (
	"^/$" => "/git"
)
alias.url = ( "/git" => "/usr/bin/cgit" )
cgi.assign = ( "/usr/bin/cgit" => "" )

Note that the only reason we compile cgit manually is because there is no Debian package for it yet (the compilation process is a bit… special). To actually build the container and tag it properly, run docker build -t="stapelberg/cgit" .:

home $ docker build -t="stapelberg/cgit" .
Uploading context 46786560 bytes
Step 1 : FROM tianon/debian:sid
 ---> 6bd626a5462b
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 3702cc3eb5c9
Step 3 : RUN apt-get dist-upgrade -y
 ---> Using cache
 ---> 1fe67f64b1a9
Step 4 : RUN apt-get install -y lighttpd
 ---> Using cache
 ---> d955c6ff4a60
Step 5 : ADD cgit /usr/bin/cgit
 ---> e577c8c27dbf
Step 6 : ADD cgit.css /var/www/cgit/cgit.css
 ---> 156dbad760f4
Step 7 : ADD cgit.png /var/www/cgit/cgit.png
 ---> 05533fd04978
Step 8 : ADD lighttpd-cgit.conf /etc/lighttpd/lighttpd-cgit.conf
 ---> b592008d759b
Step 9 : ADD cgitrc /etc/cgitrc
 ---> 03a38cfd97f4
Step 10 : EXPOSE 80
 ---> Running in 24cea04396f2
 ---> de9ecca589c8
Step 11 : ENTRYPOINT ["/usr/sbin/lighttpd", "-f", "/etc/lighttpd/lighttpd-cgit.conf", "-D"]
 ---> Running in 6796a9932dd0
 ---> d971ba82cb0a
Successfully built d971ba82cb0a

Step 2: Pushing to the registry

Docker pulls images from the registry, a service that is by default provided by Docker, Inc. Since containers may include confidential configuration like passwords or other credentials, putting images into the public registry is not always a good idea. There are services like dockify.io and quay.io which offer a private registry, but you can also run your own. Note that when running your own, you are responsible for its availability: be careful not to end up in a situation where you need to transfer docker containers to your new registry via your slow DSL connection. An alternative is to run your own registry, but store the files on Amazon S3, which also comes with additional cost.

Running your own registry in the default configuration (storing data only in the container’s /tmp directory, no authentication) is as easy as running:

d0 $ docker run -p 5000:5000 samalba/docker-registry

Then, you can tag and push the image we built in step 1:

home $ docker tag stapelberg/cgit d0.zekjur.net:5000/cgit
home $ docker push d0.zekjur.net:5000/cgit
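
To double-check that the image actually arrived in your registry, you can pull it on d0; this is strictly optional, since docker run pulls missing images on demand anyway:

d0 $ docker pull d0.zekjur.net:5000/cgit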

Step 3: Running cgit on CoreOS

To simply run the cgit container we just created and pushed, use:

d0 $ docker run d0.zekjur.net:5000/cgit
2013-12-07 18:46:16: (log.c.166) server started

But that's not very useful yet: port 80 is exposed by the docker container, but not provided to the outside world or to any other docker container. You can use -p 80 to expose the container's port 80 as a random port on the host, but for a more convenient test, let's use port 4242:

d0 $ docker run -p 4242:80 d0.zekjur.net:5000/cgit
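
As an aside: had we used plain -p 80 (random host port) instead, docker port would tell us which port was assigned. A quick sketch, assuming the cgit container is the most recently started one:

d0 $ docker port $(docker ps -l -q) 80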

When connecting to http://d0.zekjur.net:4242/ with your browser, you should now see cgit. However, even if you specified git repositories in your cgitrc, those will not work, because there are no git repositories inside the container. The most reasonable way to make them available is to provide a volume, in this case read-only, to the container. Create /media/state/_CUSTOM/git and place your git repositories in there, then re-run the container:

d0 # mkdir -p /media/state/_CUSTOM/git
d0 # cd /media/state/_CUSTOM/git
d0 # git clone git://github.com/stapelberg/godebiancontrol
d0 $ docker run -v /media/state/_CUSTOM/git:/git:ro \
  -p 4242:80 d0.zekjur.net:5000/cgit

You should be able to see the repositories now in cgit. Now we should add a unit file to make sure this container comes up when the machine reboots:

d0 # cat >/media/state/units/cgit.service <<'EOT'
[Unit]
Description=cgit
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run \
    -v /media/state/_CUSTOM/git:/git:ro \
    -p 4242:80 \
    d0.zekjur.net:5000/cgit

[Install]
WantedBy=local.target
EOT
d0 # systemctl daemon-reload
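
systemd now knows about the unit, but it will only start it on the next boot. To bring the container up right away, a quick sketch (assuming systemd on CoreOS picks up units from /media/state/units, which the daemon-reload above relies on):

d0 # systemctl start cgit.service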

Step 4: Running nginx in front of containers

You might have noticed that we did not expose cgit on port 80 directly. While there might be setups in which you have one public IP address per service, the majority of setups probably does not. Therefore, and for other benefits such as seamless updates (start the new cgit version on a separate port, test, redirect traffic by reloading nginx), we will deploy nginx as a reverse proxy for all the other containers.

The process to dockerize nginx is very similar to the one for cgit above, just with less manual compilation:

home $ mkdir -p ~/Dockerfiles/nginx
home $ cd ~/Dockerfiles/nginx
home $ cat >Dockerfile <<'EOT'
FROM tianon/debian:sid

RUN apt-get update
RUN apt-get dist-upgrade -y
RUN apt-get install -y nginx-extras

EXPOSE 80
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
EOT
home $ docker build -t=stapelberg/nginx .
home $ docker tag stapelberg/nginx d0.zekjur.net:5000/nginx
home $ docker push d0.zekjur.net:5000/nginx

Now, instead of exposing cgit directly to the world, we bind its port 4242 onto the docker0 bridge interface, which can be accessed by all other containers, too:

d0 $ docker run -v /media/state/_CUSTOM/git:/git:ro \
  -p 172.17.42.1:4242:80 d0.zekjur.net:5000/cgit

I decided to not include the actual vhost configuration in the nginx container, but rather keep it in /media/state/_CUSTOM/nginx, so that it can be modified (perhaps automatically in the future) and nginx can be simply reloaded by sending it the SIGHUP signal:

d0 # mkdir -p /media/state/_CUSTOM/nginx
d0 # cat >/media/state/_CUSTOM/nginx/cgit <<'EOT'
server {
        root /usr/share/nginx/www;
        index index.html index.htm;

        server_name d0.zekjur.net;

        location / {
                proxy_pass http://172.17.42.1:4242/;
        }
}
EOT

As the final step, run the nginx container like this:

d0 $ docker run -v /media/state/_CUSTOM/nginx:/etc/nginx/sites-enabled:ro \
  -p 80:80 d0.zekjur.net:5000/nginx

And finally, when just addressing d0.zekjur.net in your browser, you’ll be greeted by cgit!
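
Since the vhost configuration lives on the host rather than inside the container, a later change only needs an nginx reload. One hedged way to deliver the SIGHUP mentioned above, relying on the container's processes being visible in the host's process table and on pkill being available on the host:

d0 # pkill -HUP -f "nginx: master process"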

Pain point: data replication

If you read this carefully, you surely have noticed that we actually have state that is stored on each docker node: the git repositories, living in /media/state/_CUSTOM/git. While git repositories are fairly easy to replicate and back up, with other applications this is much harder: imagine a typical trac installation, which needs a database and a couple of environment files where it stores attachment files and such.

Neither Docker nor CoreOS addresses this issue; it is one that you have to solve yourself. Options that come to mind are DRBD, or rsync for less consistent but perhaps easier replication. On the database side, there are plenty of replication solutions for PostgreSQL and MySQL.
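
For the git repositories in this particular setup, even a periodic rsync to a second Docker host would go a long way; a sketch, where d1.zekjur.net is a hypothetical second machine and rsync is assumed to be installed on both ends:

d0 # rsync -az --delete /media/state/_CUSTOM/git/ core@d1.zekjur.net:/media/state/_CUSTOM/git/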

Pain point: non-deterministic builds

With the Dockerfiles I used above, both the base image (tianon/debian) and what's currently in Debian might change any second. A Dockerfile that built just fine on my computer today may not work for you tomorrow. The result is that the built docker images are actually really important to keep around on reliable storage. I talked about the registry situation in step 2 already, and there are other posts about the registry situation, too.
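
One low-tech way to keep a built image around, in addition to a registry, is to archive it as a tarball; a sketch, assuming your docker version already ships the save/load commands:

home $ docker save stapelberg/cgit > cgit.tar
home $ docker load < cgit.tar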

Pain point: read-write containers

What makes me personally a little nervous when dockerizing software is that containers are read-writable, and there is no way to run a container in read-only mode. This means that you need to get the volumes right, otherwise whatever data the software writes to disk is lost when you deploy a new docker container (or the old one dies). If read-only containers were possible, the software would not even be able to store any data, except when you properly set up a volume backed by persistent (and hopefully replicated/backed up) storage on the host.

I sent a Pull Request for read-only containers, but so far upstream does not seem very enthusiastic to merge it.

Pain point: CoreOS automatic reboots

At the time of writing, CoreOS automatically reboots after every update. Even if you have another level of load-balancing with health checks in front of your CoreOS machine, this means a brief service interruption. To work around it, use an inhibitor and reboot manually from time to time after properly failing over all services away from that machine.
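
A minimal sketch of such an inhibitor: systemd-inhibit holds a shutdown lock for as long as the wrapped command keeps running (whether the CoreOS updater honors the lock is something to verify for your version):

d0 # systemd-inhibit --what=shutdown --why="fail over services first" sleep 86400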

Conclusion

Docker feels like a huge step in the right direction, but there definitely will still be quite a number of changes in all the details before we arrive at a solution that is tried and trusted.

Ideally, there will be plenty of providers who will run your Docker containers for not a lot of money, so that one can eventually get rid of dedicated servers entirely. Before that will be possible, we also need a solution to migrate Docker containers and their persistent state from one machine to another automatically and bring up Docker containers on new machines should a machine become unavailable.

by Michael Stapelberg at 07. December 2013 19:00:00

02. December 2013

ociru.net

Guerilla Puppeting

Recently, I had to do a deployment on a bunch of servers with no Puppet Agent installed. Unfortunately, adding a decent Puppet Infrastructure to the setup was out of the question, so I started using mssh and had a little cry. You can imagine that it didn't take long to reach the point where mssh wasn't flexible enough. And let's face it: once you get used to Puppet, nothing else feels right anymore. The solution to my plight was simple: just use Puppet without a Puppet Master.

You may already know how to use the apply feature (it's well documented), but let's take another look.

simple apply

Just run a simple manifest.

$ puppet apply masterless.pp

That's the simplest way to run Puppet without a Puppet Master. masterless.pp could be a regular module or just some hacked code like this:

define feature ($foo) {
  # do things
}

# declare the defined type; 'example' is an arbitrary resource title
feature { 'example':
  foo => 'bar',
}

use hiera

If your existing code needs values from the hiera database, that's no showstopper at all. All you need is a copy of your hiera structure and the right parameter to hook it up.

$ puppet apply masterless.pp --hiera_config=./hiera.yaml

Please specify the location of your hiera config YAML with --hiera_config, not the node- or module-specific YAML (or JSON, or whatever backend you're using).
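
If you do not have a hiera.yaml at hand, a minimal sketch for hiera 1.x could look like this; the hierarchy and datadir are assumptions you will want to adapt:

$ cat >./hiera.yaml <<'EOT'
:backends:
  - yaml
:hierarchy:
  - common
:yaml:
  :datadir: /tmp/hieradata
EOT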

Puppet Hashbang

Using apply from the command line is fun, but since we are all lazy typists, why not pack the whole puppet apply call into the hashbang line and +x the file?

#!/usr/bin/puppet apply --hiera_config=/tmp/hiera.yaml
define feature ($foo) {
  # do things
}

# declare the defined type; 'example' is an arbitrary resource title
feature { 'example':
  foo => 'bar',
}

Achievement unlocked: Shell Scripting with Puppet. :-)

02. December 2013 02:05:00

24. November 2013

Brownie fudge pie | BBC Good Food

24. November 2013 22:59:07

18. November 2013

sECuREs Webseite

sup-mail

sup-mail is a nice command-line mail client (curses-based) that should be particularly interesting for Google Mail users. Its philosophy is that you generally work with threads rather than individual mails (after all, a single message is just a special case of a thread). The interface is also laid out very intelligently, showing the things that matter: the overview displays the beginning of the last message of each thread, and when reading a message, the signature and quotes from previous mails are hidden by default.

Instead of sorting messages into folders, in sup you simply assign tags. The folder concept can be emulated by giving a message exactly one tag. You can then either filter by tag or search the messages directly. Both are fast (up to a certain number of messages), which encourages actually assigning tags.

UPDATE: sup is dead. There are no committed developers left and many users have switched to a different client. Personally, I recommend notmuch, a C implementation of some of sup's concepts. There are various frontends for notmuch; the default frontend runs in Emacs.

by Michael Stapelberg at 18. November 2013 20:49:29

13. November 2013

RaumZeitLabor

Mario Kart Tournament

Put on your winter tires, crank up the seat heating, let the tires glow! Just before the year heads into its final lap, the time has come again: the RaumZeitLabor is looking for its new Mario Kart champion! We will determine the winner in the disciplines »Super Mario Kart« and »Mario Kart 64« using single- and double-elimination brackets, and the winner will be immortalized as champion in our wiki.

The tournament takes place on 30.11. starting at 19:30.

Please sign up via the comment function.

As a little warm-up, here is the final of the 5th Mario Kart tournament between else (Yoshi) and Jannik (Bowser).

by Cheatha at 13. November 2013 14:03:37

04. November 2013

RaumZeitLabor

Sewing workshops

Christmas is drawing closer, and of course you want to give your loved ones something. But it should be something special, something personal.

You have an idea but don't yet know how to make it happen? The RaumZeitLabor has everything you need! A large embroidery machine, thread in more than 120 colors, a cutting plotter, flex, flock and adhesive foils, a shirt press, sewing machines, blank textiles in various colors and sizes, …

Every Wednesday from 19:00, as part of the workshop series for Nährdinen and Nährds (sewing nerds), you have the opportunity to use all of these machines and, above all, to get help. If you would prefer a different date, you should check in advance whether someone who knows the machines will be around.

Not quite sure yet what you could make as a beginner, but keen to try everything out? The workshops have topics that are announced shortly in advance in the calendar and on the mailing list. They give you the chance to create something individual in just one evening under guidance, for example pillows, bags and much more.

by jiska at 04. November 2013 13:41:25