Planet NoName e.V.

24. June 2017

Insantity Industries

Persistent IRC

Hanging around in IRC is fun. However, if you don’t stay connected all the time (why do you shut down your computer anyway?), you are missing out on a lot of that fun! This is obviously an unbearable state of affairs and needs immediate fixing!

Persistent IRC Setup

There are several ways of building a persistent IRC setup. One is a so-called bouncer, which works like a proxy that mediates between you and the IRC network and buffers the messages. Another, simpler method is to run an IRC client on a machine that is always powered on, like a server, and to reconnect to it to read our messages (and of course write new ones).

Requirements

  • A computer that is powered on and connected to the internet 24/7
  • An account on said machine that is accessible via SSH
  • Software installed on said machine: tmux, weechat

If you cannot get tmux installed on the machine you want to run this setup on, screen can be used as an alternative; it works very similarly. This post, however, will focus on the tmux setup.

SSH

We can access our account via

ssh username@server.name

We should end up in a plain terminal prompt where we can type.

weechat

In this shell we can then start weechat, a fully keyboard-controlled terminal IRC client.

Adding a server

To add a server, type

/server add <servername> <serveraddress> -ssl -autoconnect

The options -ssl and -autoconnect are both optional. -ssl enables encryption for the connection to the IRC network, -autoconnect enables autoconnect in the server config, so that our weechat will automatically connect to that server when we start it.

<servername> will be the weechat-internal name for this server, <serveraddress> can also include a port by appending it via <serveraddress>/<port>.

Adding the freenode-network could therefore read

/server add freenode chat.freenode.net/6697 -ssl -autoconnect

Afterwards, we can connect to the freenode-network by issuing

/connect freenode

as the autoconnect only takes effect when weechat starts. Alternatively, if we /quit weechat and start it again, we should get autoconnected. And now we are connected to the freenode network!

Setting your name

By default, our nick will be set according to our username on the system. It can be changed via

/nick <newnickname>

To change it persistently, we can set the corresponding option in weechat via

/set irc.server.<servername>.nicks "demodude"

to a custom nickname. Generally, options in weechat can be set by /set <optionname> <value> or read by /set <optionname>. Weechat also supports globbing in the optionname, so getting all options for our added server can be done by

/set irc.server.<servername>.*

Joining channels

Communication on IRC happens in channels. Each channel usually has a certain topical focus. We can join channels via

/join #mytestchannel

which is a very boring channel, as no one except ChanServ and us is in there, as we can see in the user list for this channel on the right. But it technically worked, and we can now just type to post messages in this channel.

In the bar right above where we type we see 2:#mytestchannel. Weechat has the concept of buffers. Each buffer represents one server or channel. The channel we are in is now buffer 2. To get back to our server buffer, we can type /buffer 1 or hit F5; both will bring us back to buffer 1, which is our server buffer. To get back to our channel buffer, /buffer <buffernumber> or F6 will bring us there.

To enter channels on a server upon starting weechat, we can set the option irc.server.<servername>.autojoin to a comma-separated list of channels like "#channel1,#channel2,#channel3". To find all the channels on an IRC server, we can issue a /list (for freenode, be aware, the list is HUGE).
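
For the freenode server added above, that could look like the following (channel names are just examples):

/set irc.server.freenode.autojoin "#mytestchannel,#weechat"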

We can save our changes via /save and exit weechat via /quit.

Scrolling

We can scroll backward and forward using the PgUp and PgDown keys. If we are at the very bottom of a buffer, weechat will automatically scroll down with incoming messages. If we have scrolled up, weechat will not follow, as it assumes we want to read the part we scrolled to.

command recap

  • adding server: /server add <servername> <serveraddress>[/<port>] [-ssl] [-autoconnect]
  • connecting to a server: /connect <servername>
  • joining channels: /join <channelname>
  • switching buffers: /buffer <targetbuffer> or F5/F6
  • leaving channels: /close in channelbuffer
  • scrolling: PgUp, PgDown

We are up and running for IRC now! However, once we exit our weechat, we are no longer connected and missing out on all the fun! So our weechat needs to run continuously.

Introducing…

tmux

Usually, upon SSH disconnect, all of our processes are killed (including our weechat). This is different with tmux: tmux allows us to reattach to what we were doing when we last SSH’d into our server.

So we exit our weechat and are back on our shell. There, we start

tmux

We now see a green bar at the bottom of our screen.

This is tmux, with a shell running inside of it.

We can now hit Ctrl+b to tell tmux to await a tmux command, i.e. not to forward our typing to the shell but to interpret it as a command to tmux itself. We can then type d to detach, and our shell is gone.

Afterwards, we can reattach to our tmux by running tmux attach and our shell is back! This also works when we detach and then log out of our machine, log in again and then reattach our tmux.

Now the only thing left is running a weechat inside of our tmux and we are done. We can detach (or just close the terminal, also works perfectly fine) and then reattach later to read what we have been missing out on. Our persistent IRC-setup is ready to rumble.

Improving our setup

Up to now we have a running setup for persistent IRC. However, the user-experience of this setup can be significantly improved.

Tab completion

Weechat is capable of tab completion, e.g. when using /set. However, by default weechat fully completes to the first option it finds instead of completing only up to the first ., which is the config-section delimiter in weechat.

To change this, we search for completion options via

/set *completion*

and afterwards we

/set weechat.completion.partial_completion_command on
/set weechat.completion.partial_completion_command_arg on
/set weechat.completion.partial_completion_completion_other on

Weechat plugins

Weechat is highly extensible software. A full list of scripts can be found on the weechat website; some of the most useful ones are listed in the following.

You can install all scripts directly from within weechat via

/script install <scriptname>
  • buffers.pl provides a visual list of open buffers on the left side of weechat (from weechat 1.8 onwards this can be replaced by weechat’s built-in buflist, which provides the same feature)
  • beep.pl triggers our terminal bell when we are highlighted, mentioned or queried
  • autojoin_on_invite.py does basically what the name says
  • screen_away.py will detect when we detach from our IRC setup and reply “I’m not here” when someone queries us
  • autosort.py keeps your buffers sorted alphabetically, no matter how many you have open

For autosort.py you most likely also want to

/set buffers.look.indenting on
/set irc.look.server_buffer independent

autoconnecting tmux

To automate reattaching to tmux every time you SSH into the machine your persistent IRC-setup is running on, we can put

if [[ -z "$TMUX" && $TERM != screen* && $TERM != tmux* ]]
then
    tmux attach || tmux
fi

at the end of the file ~/.profile.

getting a shell besides your weechat

You can also open more shells in tmux. To do so, hit Ctrl+b and then ‘c’ for create. You will find another buffer (not the weechat-buffer, but the concept in tmux is equivalent) down in your tmux-bar.

The buffers read

<buffernumber>:terminaltitle

You can then switch between buffers via Ctrl+b, <buffernumber>.

FUO (frequently used options)

A lot of IRC networks allow registering your nick to ensure that you can reuse it and no one else grabs it. If we have done that, weechat can automatically identify us upon connecting. To do so, we just need to set the password we chose when registering in the option

irc.server.<servername>.password

Be aware, however, that weechat saves that password in plaintext in its configuration.
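
Setting it works like any other option (the server name and password are of course placeholders):

/set irc.server.freenode.password "mysecretpassword"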

Weechat can also trigger arbitrary commands when connecting to a server. This is useful for things like self-invites into invite-only-channels or other things that you want to trigger. To use this, we just need to set

irc.server.<servername>.command

to a semicolon-separated list of commands as you would issue them manually in weechat.
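
For example (the bot and channel names here are purely hypothetical):

/set irc.server.freenode.command "/msg somebot invite #secretchannel;/join #secretchannel"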

by Jonas Große Sundrup at 24. June 2017 00:00:00

22. June 2017

Insantity Industries

Hardwaretriggered Backups

No one wants to do backups, but everyone wants a restore once something goes wrong. The solution we are looking for is fully automated backups that trigger once you plug in your backup drive (or it appears $somehow).

So our objective is to detect the plugging in of a USB stick or hard disk and to trigger a backup script.

udev

The solution is udev, the component responsible for handling hotplug events in most Linux distributions.

Those events can be monitored via udevadm monitor. When then plugging in a device, we see events like

KERNEL[1108.237335] add      /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2 (usb)
KERNEL[1108.778873] add      /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6 (scsi)

for different subsystems (USB-subsystem and SCSI-subsystem in this case) which we can hook into and trigger arbitrary commands.

writing udev-rules

To identify a device, we need to take a look at its properties. For a harddisk this can be done by

udevadm info -a -p $(udevadm info -q path -n $devicename)

where $devicename can be sdb or /dev/sdb, for example. udevadm info -q path -n $devicename gives us the identifier in /sys that we have already seen in the output of udevadm monitor; udevadm info -a -p $devicepath uses this identifier to walk up the /sys hierarchy and gives us all the properties associated with the device and its parent nodes, for example my thumbdrive’s occurrence in the scsi subsystem:

KERNELS=="6:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
ATTRS{device_blocked}=="0"
ATTRS{device_busy}=="0"
ATTRS{dh_state}=="detached"
ATTRS{eh_timeout}=="10"
ATTRS{evt_capacity_change_reported}=="0"
ATTRS{evt_inquiry_change_reported}=="0"
ATTRS{evt_lun_change_reported}=="0"
ATTRS{evt_media_change}=="0"
ATTRS{evt_mode_parameter_change_reported}=="0"
ATTRS{evt_soft_threshold_reached}=="0"
ATTRS{inquiry}==""
ATTRS{iocounterbits}=="32"
ATTRS{iodone_cnt}=="0x117"
ATTRS{ioerr_cnt}=="0x2"
ATTRS{iorequest_cnt}=="0x117"
ATTRS{max_sectors}=="240"
ATTRS{model}=="DataTraveler 2.0"
ATTRS{queue_depth}=="1"
ATTRS{queue_type}=="none"
ATTRS{rev}=="PMAP"
ATTRS{scsi_level}=="7"
ATTRS{state}=="running"
ATTRS{timeout}=="30"
ATTRS{type}=="0"
ATTRS{vendor}=="Kingston"

We can use those attributes to identify our device amongst all the devices that we have.

A udev-rule now contains two major component types:

  • matching statements that identify the event and device we want to match
  • action statements that take action on the matched device

which is of course a simplification, but it will suffice for the purpose of this blogpost and most standard applications.

To do so we first create a file /etc/udev/rules.d/myfirstudevrule.rules. The filename doesn’t matter as long as it ends in .rules as only those files will be read by udev.

In this udev-rule, we first need to match our device. To do so, I will pick three of the properties above that sound like they are sufficient to uniquely match my thumbdrive.

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston"

I have added a statement matching the ACTION, as we of course only want to trigger a backup when the device appears. You can also match entire device classes by choosing the matcher-properties accordingly.

To trigger a command, we can add it to the list of commands RUN that shall run when the device is inserted, for example creating the file /tmp/itsalive:

RUN+="/usr/bin/touch /tmp/itsalive"

So our entire (rather lengthy) udev-rule in /etc/udev/rules.d/myfirstudevrule.rules reads

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston", RUN+="/usr/bin/touch /tmp/itsalive"

and we can trigger arbitrary commands with it.
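
udev usually picks up changed rules files automatically; to be safe, we can reload them explicitly and dry-run our rule against the device (assuming the thumbdrive currently shows up as /dev/sdb):

udevadm control --reload
udevadm test $(udevadm info -q path -n /dev/sdb)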

triggering long-running jobs

However, commands in the RUN list have a time constraint of 180 seconds to run. For a backup, this is likely to be insufficient. So we need a way to start long-running jobs from udev.

The solution for this is to outsource the command into a systemd-unit. Besides being able to run for longer than 180s, that way we also get proper logging of our backup-command in the journal, which is always good to have.

So we create a file /etc/systemd/system/mybackup.service containing

[Unit]
Description=backup system

[Service]
# User= is optional; it might be yourself, root, …
User=backupuser
Type=simple
ExecStart=/path/to/backupscript.sh

We then need to modify the action part of our rule from appending to RUN to read

TAG+="systemd", ENV{SYSTEMD_WANTS}="mybackup.service"

Our entire udev-rule then reads

SUBSYSTEMS=="scsi", ACTION=="add" ATTRS{model}=="DataTraveler 2.0", ATTRS{vendor}=="Kingston", TAG+="systemd", ENV{SYSTEMD_WANTS}="mybackup.service"

improve usability

To further improve usability, we can additionally append

SYMLINK+="backupdisk"

This way, an additional symlink /dev/backupdisk will be created when the device appears, which can then be used for scripting.
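
A minimal sketch of what such a backupscript.sh could look like, using this symlink (mount point, source directory and rsync options are assumptions, adapt them to your setup):

#!/bin/bash
set -euo pipefail

# mount the backup disk via the symlink created by our udev rule
mount /dev/backupdisk /mnt/backup

# copy the data; the source directory and options are just an example
rsync -a --delete /home/ /mnt/backup/home/

umount /mnt/backup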

using a disk in a dockingstation

At home I have a docking station for my laptop, and the disk I use for local backups is built into the bay of the docking station. From my computer’s point of view, this disk is connected to an internal port. When docking onto the station, it is not recognized, as internal ports are not monitored for hotplugging. Upon docking, it is therefore necessary to rescan the internal SCSI bus the disk is connected to. This can be done by issuing a full rescan of the SCSI host host1:

echo '- - -' > /sys/class/scsi_host/host1/scan

In my case the disk is connected to the port at SCSI host 1. You can find out the SCSI host of your disk by running

ls -l /sys/block/sda

where sda is the name of the block device whose SCSI host you are interested in. This returns a string like

../devices/pci0000:00/0000:00:1f.2/ata1/host1/target0:0:0/0:0:0:0/block/sda

where you can see that for this disk, the corresponding SCSI host is host1. This can then be used to issue the rescan of the correct SCSI bus. The corresponding udev rule in my case reads

SUBSYSTEM=="platform", ACTION=="change", ATTR{type}=="ata_bay", RUN+="/usr/bin/echo '- - -' > /sys/class/scsi_host/host1/scan"

Afterwards, the disk should show up and can be matched like any other disk as described above.

by Jonas Große Sundrup at 22. June 2017 00:00:00

18. June 2017

Mero’s Blog

How to not use an http-router in go

If you don't write web-thingies in go, you can stop reading now. Also, I am somewhat snarky in this article. I intend that to be humorous but am probably failing. Sorry for that.

As everyone™ knows, people need to stop writing routers/muxs in go. Some people attribute the abundance of routers to the fact that the net/http package fails to provide a sufficiently powerful router, so people roll their own. This is also reflected in this post, in which a gopher complains about how complex and hard to maintain it would be to route requests using net/http alone.

I disagree with both of these. I don't believe the problem is a lack of a powerful enough router in the stdlib. I also disagree that routing based purely on net/http has to be complicated or hard to maintain.

However, I do believe that the community currently lacks good guidance on how to properly route requests using net/http. The default result seems to be that people assume they are supposed to use http.ServeMux and get frustrated by it. In this post I want to explain why routers in general - including http.ServeMux - should be avoided and what I consider simple, maintainable and scalable routing using nothing but the stdlib.

But why?

Why do I believe that routers should not be used? I have three arguments for that: They need to be very complex to be useful, they introduce strong coupling and they make it hard to understand how requests are flowing.

The basic idea of a router/mux is, that you have a single component which looks at a request and decides what handler to dispatch it to. In your func main() you then create your router, you define all your routes with all your handlers and then you call Serve(l, router) and everything's peachy.

But since URLs can encode a lot of important information to base your routing decisions on, doing it this way requires a lot of extra features. The stdlib ServeMux is an incredibly simple router but even that contains a certain amount of magic in its routing decisions; depending on whether a pattern contains a trailing slash or not it might either be matched as a prefix or as a complete URL and longer patterns take precedence over shorter ones and oh my. But the stdlib router isn't even powerful enough. Many people need to match URLs like "/articles/{category}/{id:[0-9]+}" in their router and while we're at it also extract those nifty arguments. So they're using gorilla/mux instead. An awful lot of code to route requests.

Now, without cheating (and actually knowing that package counts as cheating), tell me for each of these requests:

  • GET /foo
  • GET /foo/bar
  • GET /foo/baz
  • POST /foo
  • PUT /foo
  • PUT /foo/bar
  • POST /foo/123

Which handler do they map to and what status code do they return ("OK"? "Bad Request"? "Not Found"? "Method not allowed"?) in this routing setup?

r := mux.NewRouter()
r.PathPrefix("/foo").Methods("GET").HandlerFunc(Foo)
r.PathPrefix("/foo/bar").Methods("GET").HandlerFunc(FooBar)
r.PathPrefix("/foo/{user:[a-z]+}").Methods("GET").HandlerFunc(FooUser)
r.PathPrefix("/foo").Methods("POST").HandlerFunc(PostFoo)

What if you permute the lines in the routing-setup?

You might guess correctly. You might not. There are multiple sane routing strategies that you could base your guess on. The routes might be tried in source order. The routes might be tried in order of specificity. Or a complicated mixture of all of them. The router might realize that it could match a route if the method were different and return a 405. Or it might not. Or it might realize that /foo/123 is, technically, an illegal argument, not a missing page. I couldn't really find a good answer to any of these questions in the documentation of gorilla/mux, for what it's worth. Which meant that when my web app suddenly didn't route requests correctly, I was stumped and needed to dive into code.

You could say that people just have to learn how gorilla/mux decides it's routing (I believe it's "as defined in source order", by the way). But there are at least fifteen thousand routers for go and no newcomer to your application will ever know all of them. When a request does the wrong thing, I don't want to have to debug your router first to find out what handler it is actually going to and then debug that handler. I want to be able to follow the request through your code, even if I have next to zero familiarity with it.

Lastly, this kind of setup requires that all the routing decisions for your application are done in a central place. That introduces edit-contention, it introduces strong coupling (the router needs to be aware of all the paths and packages needed in the whole application) and it becomes unmaintainable after a while. You can alleviate that by delegating to subrouters, which really is the basis of how I prefer to do all of this these days.

How to use the stdlib to route

Let's build the toy example from this medium post. It's not terribly complicated but it serves nicely to illustrate the general idea. The author intended to show that using the stdlib for routing would be too complicated and wouldn't scale. But my thesis is that the issue is that they are effectively trying to write a router. They are trying to encapsulate all the routing decisions into one single component. Instead, separate concerns and make small, easily understandable routing decisions locally.

Remember how I told you that we're going to use only the stdlib for routing?

Those were lies, plain and simple.

We are going to use this one helper function:

// ShiftPath splits off the first component of p, which will be cleaned of
// relative components before processing. head will never contain a slash and
// tail will always be a rooted path without trailing slash.
func ShiftPath(p string) (head, tail string) {
    p = path.Clean("/" + p)
    i := strings.Index(p[1:], "/") + 1
    if i <= 0 {
        return p[1:], "/"
    }
    return p[1:i], p[i:]
}
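
For illustration, a call like the following splits off the first path component and returns the rest as a rooted path:

head, tail := ShiftPath("/user/42/profile")
// head == "user", tail == "/42/profile"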

Let's build our app. We start by defining a handler type. The premise of this approach is that handlers are strictly separated in their concerns. They either correctly handle a request with the correct status code or they delegate to another handler which will do that. They only need to know about the immediate handlers they delegate to and they only need to know about the sub-path they are rooted at:

type App struct {
    // We could use http.Handler as a type here; using the specific type has
    // the advantage that static analysis tools can link directly from
    // h.UserHandler.ServeHTTP to the correct definition. The disadvantage is
    // that we have slightly stronger coupling. Do the tradeoff yourself.
    UserHandler *UserHandler
}

func (h *App) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    if head == "user" {
        h.UserHandler.ServeHTTP(res, req)
        return
    }
    http.Error(res, "Not Found", http.StatusNotFound)
}

type UserHandler struct {
}

func (h *UserHandler) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    id, err := strconv.Atoi(head)
    if err != nil {
        http.Error(res, fmt.Sprintf("Invalid user id %q", head), http.StatusBadRequest)
        return
    }
    switch req.Method {
    case "GET":
        h.handleGet(id)
    case "PUT":
        h.handlePut(id)
    default:
        http.Error(res, "Only GET and PUT are allowed", http.StatusMethodNotAllowed)
    }
}

func main() {
    a := &App{
        UserHandler: new(UserHandler),
    }
    http.ListenAndServe(":8000", a)
}

This seems very simple to me (not necessarily in "lines of code" but definitely in "understandability"). You don't need to know anything about any routers. If you want to understand how a request is routed, you start by looking at main. You see that (*App).ServeHTTP is used to serve any request, so you :GoDef to its definition. You see that it decides to dispatch to UserHandler, you go to its ServeHTTP method and you see directly how it parses the URL and what decisions it makes based on it.

We still need to add some patterns to our application. Let's add a profile handler:

type UserHandler struct{
    ProfileHandler *ProfileHandler
}

func (h *UserHandler) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    var head string
    head, req.URL.Path = ShiftPath(req.URL.Path)
    id, err := strconv.Atoi(head)
    if err != nil {
        http.Error(res, fmt.Sprintf("Invalid user id %q", head), http.StatusBadRequest)
        return
    }

    if req.URL.Path != "/" {
        head, req.URL.Path = ShiftPath(req.URL.Path)
        switch head {
        case "profile":
            // We can't just make ProfileHandler an http.Handler; it needs the
            // user id. Let's instead…
            h.ProfileHandler.Handler(id).ServeHTTP(res, req)
        case "account":
            // Left as an exercise to the reader.
        default:
            http.Error(res, "Not Found", http.StatusNotFound)
        }
        return
    }
    // As before
    ...
}

type ProfileHandler struct {
}

func (h *ProfileHandler) Handler(id int) http.Handler {
    return http.HandlerFunc(func(res http.ResponseWriter, req *http.Request) {
        // Do whatever
    })
}

This may, again, seem complicated but it has the cool advantage that the dependencies of ProfileHandler are clear at compile time. It needs a user id which needs to come from somewhere. Providing it via this kind of method ensures this is the case. When you refactor your code, you won't accidentally forget to provide it; it's impossible to miss!

There are two potential alternatives to this if you prefer them: You could put the user-id into req.Context() or you could be super-hackish and add them to req.Form. But I prefer it this way.
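
For completeness, a rough sketch of the req.Context() variant (the key type and names are made up for illustration; ProfileHandler would then implement ServeHTTP itself instead of the Handler(id) method):

type ctxKey int

const userIDKey ctxKey = 0

// in (*UserHandler).ServeHTTP, instead of calling h.ProfileHandler.Handler(id):
ctx := context.WithValue(req.Context(), userIDKey, id)
h.ProfileHandler.ServeHTTP(res, req.WithContext(ctx))

// in (*ProfileHandler).ServeHTTP:
id, ok := req.Context().Value(userIDKey).(int)
if !ok {
    http.Error(res, "Internal Server Error", http.StatusInternalServerError)
    return
}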

You might argue that App still needs to know all the transitive dependencies (because they are members, transitively) so we haven't actually reduced coupling. But that's not true. Its UserHandler could be created by a NewUserHandler function which gets passed its dependencies via the mechanism of your choice (flags, dependency injection,…) and gets wired up in main. All App needs to know is the API of the handlers it's directly invoking.

Conclusion

I hope I convinced you that routers in and of themselves are harmful. Pulling the routing into one component means that that component needs to encapsulate an awful lot of complexity, making it hard to debug. And as no single existing router will contain all the complicated cleverness you want to base your routing decisions on, you are tempted to write your own. Which everyone does.

Instead, split your routing decisions into small, independent chunks and express them in their own handlers. And wire the dependencies up at compile time, using the type system of go, and reduce coupling.

18. June 2017 22:57:21

25. May 2017

sECuREs Webseite

Auto-opening portal pages with NetworkManager

Modern desktop environments like GNOME offer UI for this, but if you’re using a more bare-bones window manager, you’re on your own. This article outlines how to get a login page opened in your browser when you’re behind a portal.

If your distribution does not automatically enable it (Fedora does, Debian doesn’t), you’ll first need to enable connectivity checking in NetworkManager:

# cat >> /etc/NetworkManager/NetworkManager.conf <<'EOT'
[connectivity]
uri=http://network-test.debian.org/nm
EOT

Then, add a dispatcher hook which will open a browser when NetworkManager detects you’re behind a portal. Note that the username must be hard-coded because the hook runs as root, so this hook will not work as-is on multi-user systems. The URL I’m using is an always-http URL, also used by Android (I expect it to be somewhat stable). Portals will redirect you to their login page when you open this URL.

# cat > /etc/NetworkManager/dispatcher.d/99portal <<'EOT'
#!/bin/bash

[ "$CONNECTIVITY_STATE" = "PORTAL" ] || exit 0

USER=michael
USERHOME=$(eval echo "~$USER")
export XAUTHORITY="$USERHOME/.Xauthority"
export DISPLAY=":0"
su $USER -c "x-www-browser http://www.gstatic.com/generate_204"
EOT
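
NetworkManager only executes dispatcher scripts that are owned by root and executable, so don’t forget:

# chmod 755 /etc/NetworkManager/dispatcher.d/99portal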

by Michael Stapelberg at 25. May 2017 09:37:17

26. April 2017

mfhs Blog

look4bas: Small script to search for Gaussian basis sets

In the past couple of days I hacked together a small Python script that searches through the EMSL Basis Set Exchange library of Gaussian basis sets on the command line.

Unlike the web interface, it allows using grep-like regular expressions to search the names and descriptions of the basis sets. Of course, limiting the selection to those basis sets which have definitions for a specific subset of elements is possible as well. All matching basis sets can be downloaded at once. Right now, however, only downloading the basis set files in Gaussian94 format is implemented.

The code, further information and some examples can be found on https://github.com/mfherbst/look4bas.

by mfh at 26. April 2017 00:00:21

24. April 2017

RaumZeitLabor

Excursion to the Luxor Filmpalast Bensheim

Few places bring technology and pop culture as close together as a cinema, and that goes especially for the Luxor Filmpalast Bensheim. Besides an auditorium decorated entirely in Star Wars design, there is also a shark tank and all kinds of pop culture exhibits to see, for example a DeLorean converted in Back-to-the-Future style.

Reason enough for us to take a closer look.

We will meet on Saturday, 20 May, at 16:00 in the entrance area of the Luxor Filmpalast Bensheim and will then have Luxor 7 all to ourselves. This is the Star Wars themed auditorium mentioned above, which also comes with its own lounge area.

At 16:30 we will then do what one normally does in a cinema: we will watch a film. Exclusively for us there will be a special screening of “Hackers”, the 1995 masterpiece. HACK THE PLANET!

After the film, a technically versed member of staff will give us a tour behind the scenes of the cinema, and as a special extra we will have the opportunity to visit the unofficial “museum” that is also located in the cinema building. It is an extensive private collection of action figures and other merchandise, assembled over the course of 35 years.

The event is expected to end at around 19:30.

Participation in the excursion costs 25€ per person. As usual, membership in the RaumZeitLabor e.V. is not a prerequisite for taking part in the event.

The maximum number of participants is limited, so a binding registration is required. To register, send me an email with the subject “Kino Exkursion”.

As always, participants in the excursion should coordinate among themselves and form carpools where possible. Further information about the excursion will be sent by email to the registered participants in due time.

by blabber at 24. April 2017 00:00:00

16. April 2017

sECuREs Webseite

HomeMatic re-implementation

A while ago, I got myself a bunch of HomeMatic home automation gear (valve drives, temperature and humidity sensors, power switches). The gear itself works reasonably well, but I found the management software painfully lacking. Hence, I re-implemented my own management software. In this article, I’ll describe my method, in the hope that others will pick up a few nifty tricks to make future re-implementation projects easier.

Motivation

When buying my HomeMatic devices, I decided to use the wide-spread HomeMatic Central Control Unit (CCU2). This embedded device runs the proprietary rfd wireless daemon, which offers an XML-RPC interface, used by the web interface.

I find the CCU2’s web interface really unpleasant. It doesn’t look modern, it takes ages to load, and it doesn’t indicate progress. I frequently find myself clicking on a button, only to realize that my previous click was still not processed entirely, and then my current click ends up on a different element that I intended to click. Ugh.

More importantly, even if you avoid the CCU2’s web interface altogether and only want to extract sensor values, you’ll come to realize that the device crashes every few weeks. Due to memory pressure, the rfd is killed and doesn’t come back. As a band-aid, I wrote a watchdog cronjob which would just reboot the device. I also reported the bug to the vendor, but never got a reply.

When I tried to update the software to a more recent version, things went so wrong that I decided to downgrade and not touch the device anymore. This is not a good state to be in, so eventually I started my project to replace the device entirely. The replacement is hmgo, a central control unit implemented in Go, deployed to a Raspberry Pi running gokrazy. The radio module I’m using is HomeMatic’s HM-MOD-RPI-PCB, which is connected to a serial port, much like in the CCU2 itself.

Preparation: gather and visualize traces

In order to compare the behavior of the CCU2 stock firmware against my software, I wanted to capture some traces. Looking at what goes on over the air (or on the wire) is also a good learning opportunity to understand the protocol.

  1. I wrote a Wireshark dissector (see contrib/homematic.lua). It is a quick & dirty hack, does not properly dissect everything, but it works for the majority of packets. This step alone will make the upcoming work so much easier, because you won’t need to decode packets in your head (and make mistakes!) so often.
  2. I captured traffic from the working system. Conveniently, the CCU2 allows SSH'ing in as root after setting a password. Once logged in, I used lsof and ls /proc/$(pidof rfd)/fd to identify the file descriptors which rfd uses to talk to the serial port. Then, I used strace -e read=7,write=7 -f -p $(pidof rfd) to get hex dumps of each read/write. These hex dumps can directly be fed into text2pcap and can be analyzed with Wireshark.
  3. I also wrote a little Perl script to extract and convert packet hex dumps from homegear debug logs to text2pcap-compatible format. More on that in a bit.

Preparation: research

Then, I gathered as much material as possible. I found and ended up using the following resources (in order of frequency):

  1. homegear source
  2. FHEM source
  3. homegear presentation
  4. hmcfgusb source
  5. FHEM wiki

Preparation: lab setup

Next, I got the hardware to work with a known-good software. I set up homegear on a Raspberry Pi, which took a few hours of compilation time because there were no pre-built Debian stretch arm64 binaries. This step established that the hardware itself was working fine.

Also, I got myself another set of traces from homegear, which is always useful.

Implementation

Now the actual implementation can begin. Note that up until this point, I hadn’t written a single line of actual program code. I defined a few milestones which I wanted to reach:

  1. Talk to the serial port.
  2. Successfully initialize the HM-MOD-RPI-PCB
  3. Receive any BidCoS broadcast packet
  4. Decode any BidCoS broadcast packet (can largely be done in a unit test)
  5. Talk to an already-paired device (re-using the address/key from my homegear setup)
  6. Configure an already-paired device
  7. Pair a device

To make the implementation process more convenient, I changed the compilation command of my editor to cross-compile the program, scp it to the Raspberry Pi and run it there. This allowed me to test my code with one keyboard shortcut, and I love quick feedback.
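
Such a build-and-deploy command might look roughly like the following (host name and target path are placeholders, and the exact GOARCH depends on the Raspberry Pi model):

GOOS=linux GOARCH=arm64 go build -o hmgo . && scp hmgo pi:/tmp/ && ssh pi /tmp/hmgo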

Retrospective

The entire project took a few weeks of my spare time. If I had taken some time off of work, I’m confident I could have implemented it in about a week of full-time work.

Consciously doing research, preparation and milestone planning was helpful. It gave me a good sense of my progress and achievable goals.

As I’ve learnt previously, investing in tools pays off quickly, even for one-off projects like this one. I’d recommend everyone who’s doing protocol-related work to invest some time in learning to use Wireshark and writing custom Wireshark dissectors.

by Michael Stapelberg at 16. April 2017 10:20:00

25. March 2017

sECuREs Webseite

Review: Turris Omnia (with Fiber7)

The Turris Omnia is an open source (an OpenWrt fork) open hardware internet router created and supported by nic.cz, the registry for the Czech Republic. It’s the successor to their Project Turris, but with better specs.

I was made aware of the Turris Omnia while it was being crowd-funded on Indiegogo and decided to support the cause. I’ve been using OpenWrt on my wireless infrastructure for many years now, and finding a decent router with enough RAM and storage for the occasional experiment used to not be an easy task. As a result, I had been using a very stable but also very old tp-link WDR4300 for over 4 years.

For the last 2 years, I had been using a Ubiquiti EdgeRouter Lite (Erlite-3) with a tp-link MC220L media converter and the aforementioned tp-link WDR4300 access point. Back then, that was one of the few setups which delivered 1 Gigabit in passively cooled (quiet!) devices running open source software.

With its hardware specs, the Turris Omnia promised to be a big upgrade over my old setup: the project pages described the router to be capable of processing 1 Gigabit, equipped with a 802.11ac WiFi card and having an SFP slot for the fiber transceiver I use to get online. Without sacrificing performance, the Turris Omnia would replace 3 devices (media converter, router, WiFi access point), which yields nice space and power savings.

Performance

Wired performance

As expected, the Turris Omnia delivers a full Gigabit. A typical speedtest.net result is 2ms ping, 935 Mbps down, 936 Mbps up. Speeds displayed by wget and other tools max out at the same values as with the Ubiquiti EdgeRouter Lite. Latency to well-connected targets such as Google remains at 0.7ms.

WiFi performance

I did a few quick tests on speedtest.net with the devices I had available, and here are the results:

Client                        Down (WDR4300)   Down (Omnia)   Up
ThinkPad X1 Carbon 2015       35 Mbps          470 Mbps       143 Mbps
MacBook Pro 13" Retina 2014   127 Mbps         540 Mbps       270 Mbps
iPhone SE                     226 Mbps         227 Mbps

Compatibility (software/setup)

OpenWrt’s default setup at the time when I set up this router was the most pleasant surprise of all: using the Turris Omnia with fiber7 is literally Plug & Play. After opening the router’s wizard page in your web browser, you literally need to click “Next” a few times and you’re online with IPv4 and IPv6 configured in a way that will be good enough for many people.

I realize this is due to Fiber7 using “just” DHCPv4 and DHCPv6 without requiring credentials, but man is this nice to see. Open source/hardware devices which just work out of the box are not something I’m used to :-).

One thing I ended up changing, though: in the default setup (at the time when I tried it), hostnames sent to the DHCP server would not automatically resolve locally via DNS. I.e., I could not use ping beast without any further setup to send ping probes to my gaming computer. To fix that, for now one needs to disable KnotDNS in favor of dnsmasq’s built-in DNS resolver. This will leave you without KnotDNS’s DNSSEC support. But I prefer ease of use in this particular trade-off.

Compatibility (hardware)

Unfortunately, the SFPs which Fiber7 sells/requires are not immediately compatible with the Turris Omnia. If I understand correctly, the issue is related to speed negotiation.

After months of discussion in the Turris forum and not much success on fixing the issue, Fiber7 now offers to disable speed negotiation on your port if you send them an email. Afterwards, your SFPs will work in media converters such as the tp-link MC220L and the Turris Omnia.

The downside is that debugging issues with your port becomes harder, as Fiber7 will no longer be able to see whether your device correctly negotiates speed, the link will just always be forced to “up”.

Updates

The Turris Omnia’s automated updates are a big differentiator: without having to do anything, the Turris Omnia will install new software versions automatically. This feature alone will likely improve your home network’s security and this feature alone justifies buying the router in my eyes.

Of course, automated upgrades constitute a certain risk: if the new software version or the upgrade process has a bug, things might break. This happened once to me in the 6 months that I have been using this router. I still haven’t seen a statement from the Turris developers about this particular breakage — I wish they would communicate more.

Since you can easily restore your configuration from a backup, I’m not too worried about this. In case you’re travelling and really need to access your devices at home, I would recommend to temporarily disable the automated upgrades, though.

Product Excellence

One feature I love is that the brightness of the LEDs can be controlled, to the point where you can turn them off entirely. It sounds trivial, but the result is that I don’t have to apply tape to this device to dim its LEDs. To not disturb watching movies, playing games or having guests crash on the living room couch, I can turn the LEDs off and only turn them back on when I actually need to look at them for something — in practice, that’s never, because the router just works.

Recovering the software after horribly messing up an experiment is pretty easy: when holding the reset button for a certain number of seconds, the device enters a mode where a new firmware file is flashed to the device from a plugged-in USB memory stick. What’s really nice is that the mode is indicated by the color of the LEDs, sparing you the tedious counting other devices require, which I tend to always start at the wrong second. This is a very good compromise between saving cost and pleasing developers.

The Turris Omnia has a serial port readily accessible via a pin header that’s reachable after opening the device. I definitely expected an easily accessible serial port in a device which targets open source/hardware enthusiasts. In fact, I have two ideas to make that serial port even better:

  1. Label the pins on the board — that doesn’t cost a cent more and spares some annoying googling for a page which isn’t highly ranked in the search results. Sparing some googling is a good move for an internet router: chances are that accessing the internet will be inconvenient while you’re debugging your router.
  2. Expose the serial port via USB-C. The HP 2530-48G switch does this: you don’t have to connect a USB2serial to a pin header yourself, rather you just plug in a USB cable which you’ll probably carry anyway. Super convenient!

Conclusion

tl;dr: if you can afford it, buy it!

I’m very satisfied with the Turris Omnia. I like how it is both open source and open hardware. I rarely want to do development with my main internet router, but when I do, the Turris Omnia makes it pleasant. The performance is as good as advertised, and I have not noticed any stability problems with neither the router itself nor the WiFi.

I outlined above how the next revision of the router could be made ever so slightly more perfect, and I described the issues I ran into (SFP compatibility and an update breaking my non-standard setup). If these aren’t deal-breakers for you (and it seems unlikely that they would be), you should definitely consider the Turris Omnia!

by Michael Stapelberg at 25. March 2017 09:40:00

25. February 2017

RaumZeitLabor

[Excursion time] RZL meets NTM


WERTHER RaumZeitLaborants!

There is KAIN escaping the excursions! The ABSCHIEDSWALZER from the last excursion has barely been danced out, and already things are getting going again.

Thanks to the MACHT DES SCHICKSALS, the next excursion destination on the agenda is the Nationaltheater Mannheim.

On Saturday, 18 March 2017, from 15:00 onwards we have the opportunity (DONNERWETTER, GANZ FAMOS) to take a look behind the scenes. Anyone who has always wanted to learn a bit more about the stage and LICHT technology as well as the audio and video systems of the Nationaltheater should mark the date in their calendar with a KREIDEKREIS.

If you want to take part, or would like to bring CARMEN or OTHELLO along, SO WIRD’S GEMACHT: please send me an email by 10 March 2017, 13:37 with the names of everyone coming along (please also mention if you are a pupil or a BETTELSTUDENT!) [🎭].

So that not TAUSEND SÜSSE BEINCHEN trample through the theatre, the number of participants is limited to 15.

Admission is free for you, so that even DER GEIZIGE RITTER can come along!

Greetings,
DIE FLEDERMAUS


[🎭] No mail, no NTM!

by flederrattie at 25. February 2017 00:00:00

19. February 2017

Insantity Industries

The beauty of Archlinux

Recently I have been asked (again) why to use Arch Linux when choosing a linux distribution. Because my fellow inhabitants of the IRC channel affected are always very pleased by the wall of text awaiting them after such a question when they return to the keyboard, I decided to write down my thoughts on that matter so that in the future I can simply link to this post.

I use Arch Linux as my main distribution and will lay out why I personally like it. Afterwards I will also outline why you might not want to use Arch Linux. This is merely a collection of arguments and is certainly neither exhaustive, nor do I claim to be unbiased in this matter, as I personally like the distro a lot. Ultimately you will have to decide on your own which distribution is best suited for you.

This post is about the specifics of Arch Linux in the family of Linux distributions. It is not about comparing Linux to Windows or OSX or other operating system families. But enough of the chitchat, let’s get to the matter at hand:

Why to use Arch Linux

Simplicity

Arch is developed under the KISS principle (“Keep It Simple, Stupid”). This does not mean that Arch is necessarily easy, but the system and its inherent structure are simple in many aspects:

Few abstraction layers

Arch generally has very few abstraction layers. Instead of having a tool to autoconfigure things for you, configuration is usually done manually in Arch. This does not mean that something like NetworkManager is not available in the package repos, but that the basic system configuration is usually done without additional tooling, by editing the respective configuration files. This means that one gets a good feeling for what’s going on in one’s own system and how it is structured, because one has made those configurations oneself.

This also means no automagical merging or changes when an update requires a change in a configuration file. The way this is handled in Arch is that pacman, the distribution’s package manager, notices if a config file was changed from the package’s default and will then install the package’s new default with the extension .pacnew, telling you about this so that you can merge those files on your own after finishing the upgrade.
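
In practice this shows up as a pacman warning along the lines of (the file name is just an example)

warning: /etc/pacman.conf installed as /etc/pacman.conf.pacnew

and you can then compare and merge the two files with a tool of your choice, for example

vimdiff /etc/pacman.conf /etc/pacman.conf.pacnew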

Easy, yet powerful system to build packages

Speaking of packages, it is very easy to build packages in Arch. You simply write the shell commands that you would use to compile/install/whatever the software into a special function in a so-called PKGBUILD file, which is effectively a build script, call makepkg, and you end up with a package that installs your software under the package manager’s control. The package itself is little more than the portion of the file system tree relevant for that package, plus a little metadata for the package manager.
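
A minimal PKGBUILD sketch might look like this (package name, URL and build steps are hypothetical):

pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="An example package"
arch=('x86_64')
url="https://example.org"
license=('MIT')
source=("https://example.org/hello-example-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
    cd "$pkgname-$pkgver"
    make
}

package() {
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
}

Running makepkg in the directory containing this file builds the package; makepkg -si additionally installs it via pacman.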

It is also very simple to rebuild packages from the official repositories, e.g. if one wants to have features compiled in that the default configuration in Arch has disabled (think of a kernel with custom options). The Arch Build System, ABS for short, provides you with all the sources you need for that purpose. Edit one or two simple files and you are ready to go.

No unnecessary package splitting

Software in Arch is usually packaged as it is organized by the upstream developers. This means that if upstream provides one piece of software, it will end up being one package in Arch and is not split into multiple packages, for example to separate out plugins or data files. This reduces the number of packages present and makes it easier to keep an overview.

Getting rid of old

If a technology used in Arch turns out to have better alternatives and the developers decide to transition to the new technology, this transition is done in one clean cut. This means no legacy tech lying around in the system; the new technology is fully integrated without compromises made because old technology must still be supported.

One example of this is the sysVinit-to-systemd transition: instead of using systemd’s capabilities to remain compatible with sysVinit, the transition was made en bloc. This makes the integration of systemd very smooth and free of compromises to backwards compatibility, which simplifies usage a lot, as one gets rid of a lot of corner cases that would otherwise have needed to be considered.

Getting rid of old in general

Generally, getting rid of old tech is one of the major things I like about Arch. If there is something new that is better than the old solution, or that will eventually have to be adopted anyway, it will consequently be adopted.

Another example is the restructuring of the file system hierarchy, namely the unification of the /usr/bin, /usr/sbin, /usr/lib, /bin, /sbin and /lib directories into /usr/bin and /usr/lib. Several distributions have announced that they will perform that unification eventually, but instead of doing it successively, one directory after the other (which some distributions even spread out over several major releases), and dealing with the issue over an extended period of time, the Arch devs decided to unify all of it en bloc, being done with it once and for all.

Also, instead of trying to be convenient on updates and providing compatibility layers when something changes, the Arch package manager will tell you that manual intervention is required; you make the necessary changes and your system is migrated to the new, compatibility-layer-free configuration. Such interventions are rather rare, however, and are prominently announced on the arch-announce mailing list and in the news feed on the website, so just subscribe somewhere and you will be notified when manual intervention is required on the next update and what to do. This also means, though, that in Arch you should not do automated updates, but manual ones, and read the upgrade log, just in case something comes up.

Documentation

Arch has excellent documentation. The Arch wiki, especially but not limited to the English one, is usually an excellent, very detailed source of information, to the point that I tend to look up problems I have in other distributions in the Arch wiki, because chances are good that the problem is covered in the Arch documentation. For me, the Arch wiki has sometimes even surpassed a project’s own documentation in usefulness.

Learning

Arch is a quite hand-curated distro compared to other distributions out there. One is exposed to the internals of the system, as a lot of things in Arch are simply done by hand instead of by a tool (see above). And while I would not claim that Arch is necessarily easy (although it is simple), in combination with the excellent documentation it can be a very fruitful learning experience. However, it is learning the hard way, so if that is your goal, Arch is in my opinion very well suited for it, but you have to be willing (depending on your previous experience) to climb a steep mountain. Nevertheless, I can fully recommend Arch for that purpose if you are willing to do so. Especially as, once you are at the top, you get to enjoy all the other benefits mentioned.

Software and Updates

Rolling Release

Arch is not a fixed-point-release distro like many others, but a rolling release distro. This means that instead of having a new release with new features every couple of weeks, months or years, Arch ships new software more or less as soon as it gets released. So instead of having certain points in time where a lot of new stuff comes in at once (possibly a lot, possibly requiring migration to new software versions), new stuff comes in piece by piece, spread out over time. This is primarily a matter of personal preference, but I find it more convenient to have regular feature updates (in 99.5% of cases without the need to intervene at all) instead of having to apply and test large updates with a new point release.

Bleeding Edge

Arch’s rolling-release model not only ensures that software comes in spread out over time, it also allows software to be shipped as soon as it is released, and Arch makes use of that. This means Arch is one of the distros that usually ship the most recent versions of software, getting new stuff as soon as there is a new upstream release.

Vanilla Software

Another thing I really like about software in Arch is that it is shipped as upstream intended, which means the software compiled is usually precisely the code that upstream released, with no distribution-specific patches applied. Exceptions to this rule are extremely rare, for example a stability fix in the kernel that was already upstream but not yet released. That patch lived for a short while in the linux package in Arch, but was removed again once upstream shipped the fix.

AUR

The official package repositories contain most of the software one needs for daily business. However, if a piece of software is not in those repositories, one can easily package it oneself, as mentioned above, and a lot of people might already have done exactly that. There is the Arch User Repository (AUR), a collection of the PKGBUILDs used for that purpose, along the lines of “hey, I needed that software and packaged it for myself, anyone else need the PKGBUILD file I used?” So if one needs a piece of software that is not in the official repositories, chances are fairly high that someone else has already packaged it and one can use that build script as a base for one’s own package. This even applies to a lot of niche packages you might never have expected to have been packaged at all.
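
A typical manual AUR workflow might look like this (the package name is a placeholder):

git clone https://aur.archlinux.org/somepackage.git
cd somepackage
makepkg -si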

Things to consider

I have pointed out why I like Arch Linux very much and why I definitely can recommend it.

However, be aware of a couple of things:

  • If you just want a distribution that is fire-and-forget and just works out of the box without manual configuration, Arch is not what you are looking for.
  • If you want a rock-solid distribution that is guaranteed to never break in any minor detail, then you are better served with something like Debian (and its stable release), as the fact that Arch ships bleeding-edge software carries a certain residual risk of running into bugs no one has run into before and that slipped through upstream’s and Arch’s testing procedures.

If none of these are an issue for you (or they are negligible compared to the advantages), Arch might be a distro very well suited for you.

by Jonas Große Sundrup at 19. February 2017 00:00:00

The beauty of Arch Linux

Recently I have been asked (again) why to use Arch Linux when choosing a linux distribution. Because my fellow inhabitants of the IRC channel affected are always very pleased by the wall of text awaiting them after such a question when they return to the keyboard, I decided to write down my thoughts on that matter so that in the future I can simply link to this post.

I use Archlinux as my main distribution and will lay out why I personally like it. Afterwards I will also line out why you might not want to use Arch Linux. This is merely a collection of arguments and is certainly neither exhaustive nor do I claim to be unbiased in this matter, as I personally like the distro a lot. Ultimately you will have to decide on your own which distribution is best suited for you.

This post is about the specifics of Arch Linux in the family of Linux distributions. It is not about comparing Linux to Windows or OSX or other operating system families. But enough of the chitchat, let’s get to the matter at hand:

Why to use Arch Linux

Simplicity

Arch is developed under the KISS-principle (“Keep It Simple, Stupid”). This does not mean that Arch is necessarily easy but the system and its inherent structure is simple in many various aspects:

Few abstraction layers

Arch generally has very few abstraction layers. Instead of having a tool to autoconfigure things for you, for example, configuration is usually done manually in Arch. This does not mean that something like Network Manager is not available in the package repos, but that the basic system configuration is usually done without additional tooling by editing the respective configuration files. This means that one has a good feeling for what’s going on in one’s own system and how it is structured, because one has made those configurations oneself.

This also includes no automagical merging or changes when an update requires a change in a configuration file. The way it is handled in Arch is that pacman, the distribution’s packacke manager notices if a configfile was changed from the packet’s default and will then install the changed version in the packacke with the extension .pacnew while telling you about this so that you can merge those files on your own after finishing all the upgrades.

Easy, yet powerful system to build packages

Speaking of packages, it is very easy to build packages in Arch. You simply write the shell commands you would use to compile/install/whatever the software into a special function in a so-called PKGBUILD file, which is effectively a build script, call makepkg, and you end up with a package that installs your software under the package manager’s control. The package itself is nothing more than a mirror of the part of the file-system tree relevant for that package, plus a little metadata for the package manager.
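
To give a rough idea, a minimal PKGBUILD can look like the following sketch (package name, URL and build commands are made up for illustration, not a real package):

pkgname=hellopkg
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
url="https://example.com/hellopkg"
license=('MIT')
source=("https://example.com/hellopkg-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Running makepkg next to this file produces a package that can then be installed with pacman -U.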

It is also very simple to rebuild packages from the official repositories, e.g. if one wants to have features compiled in that the default configuration in Arch has disabled (think of a kernel with custom options). asp, packaged in the official repos, provides you with tooling to check out the sources you need for that purpose. Edit one or two simple files and you are ready to go.
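
In practice this can be as little as the following (using the linux package as an example; the exact directory layout created by asp may differ):

asp checkout linux    # fetch the official build files for the linux package
cd linux/trunk
# edit PKGBUILD and/or the kernel config as desired, then rebuild:
makepkg -s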

No unnecessary package splitting

Software in Arch is usually packaged the way it is organized by the upstream developers. This means that if upstream provides one piece of software, it will end up as one package in Arch and is not split into multiple packages, for example to separate out plugins or data files. This reduces the number of packages present and makes it easier to keep an overview.

Getting rid of the old

If a technology used in Arch turns out to have better alternatives and the developers decide to transition to the new technology, this transition is done in one clean cut. This means no legacy tech lying around in the system; the new technology is fully integrated without compromises made because old technology must still be supported.

One example of this is the SysVinit-to-systemd transition: instead of using systemd’s capability to stay compatible with SysVinit, the transition was made en bloc. This makes the integration of systemd very smooth and free of backwards-compatibility compromises, which simplifies usage a lot, as one gets rid of many corner cases that would otherwise have to be considered.

Getting rid of the old in general

Generally, getting rid of old tech is one of the major things I like about Arch. If there is something new that is better than the old solution or will eventually be adopted anyway, it will consequently be adopted.

Another example is the restructuring of the file system hierarchy, namely the unification of the directories /usr/bin, /usr/sbin, /bin, /sbin, /usr/lib and /lib into /usr/bin and /usr/lib. Several distributions have announced that they will perform this unification eventually, but instead of doing it directory by directory (which other distributions even spread out over several major releases) and dealing with the issue over an extended period of time, the Arch devs decided to unify all of it en bloc, being done with it once and for all.

Also, instead of trying to be convenient on updates and providing compatibility layers when something changes, the Arch package manager will tell you that manual intervention is required; you make the necessary changes and your system is migrated to the new, compatibility-layer-free configuration. Such interventions are, however, rather rare and are prominently announced on the arch-announce mailing list, the news feed and the website itself, so just subscribe somewhere and you will be notified when manual intervention is required on the next update and what to do. This also means, though, that in Arch you should not do automated updates, but manual ones, and read the upgrade log, just in case something comes up.
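
Done manually, the routine boils down to something like this:

# check https://www.archlinux.org/news/ (or the arch-announce list) for
# pending manual interventions, then upgrade the whole system in one go:
pacman -Syu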

Documentation

Arch has excellent documentation. The Arch wiki, especially the English one, is usually a very detailed source of information, to the point that I tend to look up problems I have in other distributions in the Arch wiki, because chances are good that the problem is covered in the Arch documentation. For me, the Arch wiki has sometimes even surpassed the respective project’s own documentation in its usefulness.

Learning

Arch is a quite hand-curated distro compared to other distributions out there. One is exposed to the internals of the system, as a lot of things in Arch are simply done by hand instead of by a tool (see above). And while I would not claim that Arch is necessarily easy (although it is simple), in combination with the excellent documentation it can be a very fruitful learning experience. However, it is learning the hard way, so if that is your goal, Arch is in my opinion very well suited for it, but you have to be willing to (depending on your previous experience) climb a steep mountain. Nevertheless, I can fully recommend Arch for that purpose if you are willing to do so. Especially as, once you are at the top, you get to enjoy all the other benefits mentioned.

Software and Updates

Rolling Release

Arch is not a fixed-point-release distro like many others, but a rolling-release distro. This means that instead of having a new release with new features every couple of weeks, months or years, Arch ships new software more or less as soon as it gets released. Therefore, instead of certain points in time where a lot of new stuff comes in at once (possibly requiring migrations to new software versions), new stuff arrives piece by piece, spread out over time. This is primarily a matter of personal preference, but I find it more convenient to have regular feature updates (in 99.5% of cases without the need to intervene at all) instead of having to apply and test large updates with a new point release.

Bleeding Edge

Arch’s rolling-release model not only ensures that software comes in spread out over time, it also allows software to be shipped as soon as it is released, and Arch makes use of that. This means that Arch is one of the distros that usually carry the most recent versions of software, getting new stuff as soon as there is a new upstream release.

Vanilla Software

Another thing I really like about software in Arch is that it is shipped as upstream intended: the software compiled is usually precisely the code that upstream released, with no distribution-specific patches applied. Exceptions to this rule are extremely rare, for example a stability fix in the kernel that had already been accepted upstream but not yet released. That patch lived for a short while in the linux package in Arch, but was removed again once upstream shipped the fix.

AUR

The official package repositories are filled with most of the software one needs for daily business. However, if some software is not in those repositories, one can easily package it oneself, as mentioned above, and a lot of people might have already done exactly that. The Arch User Repository (AUR) is a collection of the PKGBUILDs used for that, along the lines of “hey, I needed that software and packaged it for myself, does anyone have a need for the PKGBUILD file I used?” So if one needs software that is not in the official repositories, chances are fairly good that someone else has already packaged it and one can use that build script as a base for one’s own package. This even applies to a lot of niche packages you might never have expected to be packaged at all.
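
A typical AUR workflow then looks roughly like this (the package name is just a placeholder):

git clone https://aur.archlinux.org/some-niche-tool.git
cd some-niche-tool
# always review the PKGBUILD before building
makepkg -si    # build the package and install it via pacman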

Things to consider

I have pointed out why I like Arch Linux very much and why I definitely can recommend it.

However, be aware of a couple of things:

  • If you want a distribution that is fire-and-forget and just works out of the box without manual configuration, Arch is not what you are looking for.
  • If you want a rock-solid distribution that is guaranteed to never break in any minor detail, then you are better served with something like Debian (and its stable release), as the fact that Arch ships bleeding-edge software carries a certain residual risk of running into bugs that no one has run into before and that slipped through upstream’s and Arch’s testing procedures.

If none of those are an issue for you (or negligible compared to the advantages), Arch might be a distro very well suited for you.

by Jonas Große Sundrup at 19. February 2017 00:00:00

07. February 2017

RaumZeitLabor

Hackertour Holzheizkraftwerk Heidelberg

To all friends of alliterations and power plants!

On Wednesday, 22.02.2017, starting at 16:00, we have the opportunity to tour the Holzheizkraftwerk Heidelberg (wood-fired combined heat and power plant).

“Electricity without nuclear”: the Stadtwerke Heidelberg have set themselves the goal of completely phasing out nuclear power by this year. One measure towards that goal was the construction of the wood-fired heat and power plant in Pfaffengrund. 90% of the energy is generated from green cuttings and so-called landscape maintenance material.

Anyone who wants to know what the remaining 10% consist of, how many megawatt hours of electricity and heat the plant produces, and what all of this has to do with the Heidelberger Bahnstadt should not miss this tour.

As always, you are of course welcome to come along even if you are not (yet) a member of RaumZeitLabor e.V. However, please confirm by mail[🔥] to me by 15.02.2017, as the tour is limited to 15 places for security reasons.



[🔥] No mail, no wood, no heat!

by flederrattie at 07. February 2017 00:00:00

28. January 2017

sECuREs Webseite

Atomically writing files in Go

Writing files is simple, but correctly writing files atomically in a performant way might not be as trivial as one might think. Here’s an extensively commented function to atomically write compressed files (taken from debiman, the software behind manpages.debian.org):

package main

import (
    "bufio"
    "compress/gzip"
    "io"
    "io/ioutil"
    "log"
    "os"
    "path/filepath"
)

func tempDir(dest string) string {
    tempdir := os.Getenv("TMPDIR")
    if tempdir == "" {
        // Convenient for development: decreases the chance that we
        // cannot move files due to TMPDIR being on a different file
        // system than dest.
        tempdir = filepath.Dir(dest)
    }
    return tempdir
}

func writeAtomically(dest string, compress bool, write func(w io.Writer) error) (err error) {
    f, err := ioutil.TempFile(tempDir(dest), "atomic-")
    if err != nil {
        return err
    }
    defer func() {
        // Clean up (best effort) in case we are returning with an error:
        if err != nil {
            // Prevent file descriptor leaks.
            f.Close()
            // Remove the tempfile to avoid filling up the file system.
            os.Remove(f.Name())
        }
    }()

    // Use a buffered writer to minimize write(2) syscalls.
    bufw := bufio.NewWriter(f)

    w := io.Writer(bufw)
    var gzipw *gzip.Writer
    if compress {
        // NOTE: gzip’s decompression phase takes the same time,
        // regardless of compression level. Hence, we invest the
        // maximum CPU time once to achieve the best compression.
        gzipw, err = gzip.NewWriterLevel(bufw, gzip.BestCompression)
        if err != nil {
            return err
        }
        defer gzipw.Close()
        w = gzipw
    }

    if err := write(w); err != nil {
        return err
    }

    if compress {
        if err := gzipw.Close(); err != nil {
            return err
        }
    }

    if err := bufw.Flush(); err != nil {
        return err
    }

    // Chmod the file world-readable (ioutil.TempFile creates files with
    // mode 0600) before renaming.
    if err := f.Chmod(0644); err != nil {
        return err
    }

    if err := f.Close(); err != nil {
        return err
    }

    return os.Rename(f.Name(), dest)
}

func main() {
    if err := writeAtomically("demo.txt.gz", true, func(w io.Writer) error {
        _, err := w.Write([]byte("demo"))
        return err
    }); err != nil {
        log.Fatal(err)
    }
}

rsync(1) will fail when it lacks permission to read files. Hence, if you are synchronizing a repository of files while updating it, you’ll need to set TMPDIR to point to a directory on the same file system (for rename(2) to work) which is not covered by your rsync(1) invocation.

When calling writeAtomically repeatedly to create lots of small files, you’ll notice that creating gzip.Writers is actually rather expensive. Modifying the function to re-use the same gzip.Writer yielded a significant decrease in wall-clock time.
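
A minimal sketch of that re-use, using gzip.Writer.Reset and a sync.Pool (this is my own illustration, not the actual debiman code, and the tempfile/rename logic from above is omitted for brevity):

package main

import (
    "compress/gzip"
    "io"
    "log"
    "os"
    "sync"
)

// A pool of gzip.Writers, so that the expensive writer setup happens
// only rarely instead of once per written file.
var gzipWriters = sync.Pool{
    New: func() interface{} {
        w, err := gzip.NewWriterLevel(nil, gzip.BestCompression)
        if err != nil {
            panic(err) // only possible for invalid compression levels
        }
        return w
    },
}

func writeCompressed(dest string, write func(w io.Writer) error) error {
    f, err := os.Create(dest)
    if err != nil {
        return err
    }
    defer f.Close()

    gz := gzipWriters.Get().(*gzip.Writer)
    defer gzipWriters.Put(gz)
    // Point the re-used writer at the new destination instead of
    // allocating a fresh gzip.Writer.
    gz.Reset(f)

    if err := write(gz); err != nil {
        return err
    }
    return gz.Close()
}

func main() {
    for i := 0; i < 3; i++ {
        if err := writeCompressed("demo.txt.gz", func(w io.Writer) error {
            _, err := w.Write([]byte("demo"))
            return err
        }); err != nil {
            log.Fatal(err)
        }
    }
}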

Of course, if you’re looking for maximum write performance (as opposed to minimum resulting file size), you should use a different gzip level than gzip.BestCompression.

by Michael Stapelberg at 28. January 2017 21:29:00

Webfont loading with FOUT

For manpages.debian.org, I looked at loading webfonts. I considered the following scenarios:

#   local?   cached?   Network   Expected         Observed
1   Yes      /         /         perfect render   perfect render
2   No       Yes       /         perfect render   perfect render
3   No       No        Fast      FOUT             FOIT
4   No       No        Slow      FOUT             some FOUT, some FOIT

Scenarios #1 and #2 are easy: the font is available, so if we inline the CSS into the HTML page, the browser should be able to render the page perfectly on the first try. Unfortunately, the more common scenarios are #3 and #4, since many people reach manpages.debian.org through a link to an individual manpage.

The default browser behavior, if we just specify a webfont using @font-face in our stylesheet, is the Flash Of Invisible Text (FOIT), i.e. the page loads, but text remains hidden until fonts are loaded. On a good 3G connection, this means users will have to wait 500ms to see the page content, which is far too long for my taste. The user experience becomes especially jarring when the font doesn’t actually load — users will just see a spinner and leave the site frustrated.

In comparison, when using the Flash Of Unstyled Text (FOUT), loading time is 250ms, i.e. cut in half! Sure, you have a page reflow after the fonts have actually loaded, but at least users will immediately see the content.

In an ideal world

In an ideal world, I could just specify font-display: swap in my @font-face definition, but the css-font-display spec is unofficial and not available in any browser yet.
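
For reference, per the draft spec this would look roughly as follows (purely hypothetical at the time of writing, since no browser implements it yet):

@font-face {
  font-family: 'Roboto';
  src: url(/Roboto-Regular.woff2) format('woff2'),
       url(/Roboto-Regular.woff) format('woff');
  /* show fallback text immediately, swap in the webfont once it has loaded */
  font-display: swap;
}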

Toolbox

To achieve FOUT when necessary and perfect rendering when possible, we make use of the following tools:

CSS font loading API
The font loading API allows us to request a font load before the DOM is even created, i.e. before the browser would normally start processing font loads. Since we can specify a callback to be run when the font is loaded, we can apply the style as soon as possible — if the font was cached or is installed locally, this means before the DOM is first created, resulting in a perfect render.
This API is available in Firefox, Chrome, Safari, Opera, but notably not in IE or Edge.
single round-trip asynchronous font loading
For the remaining browsers, we’ll need to load the fonts and only apply them after they have been loaded. The best way to do this is to create a stylesheet which contains the inlined font files as base64 data and the corresponding styles to enable them. Once the browser has loaded the file, it applies the font, which at that point is guaranteed to be present.
In order to load that stylesheet without blocking the page load, we’ll use Preloading.
Native <link rel="preload"> support is available only in Chrome and Opera, but there are polyfills for the remaining browsers.
Note that a downside of this technique is that we don’t distinguish between WOFF2 and WOFF fonts; we always just serve WOFF. This maximizes compatibility, but means that WOFF2-capable browsers have to download more bytes than they would if we offered WOFF2.

Combination

The following flow chart illustrates how to react to different situations:

Putting it all together

Example fonts stylesheet: (base64 data removed for readability)

@font-face {
  font-family: 'Inconsolata';
  src: local('Inconsolata'),
       url("data:application/x-font-woff;charset=utf-8;base64,[…]") format("woff");
}

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: local('Roboto'),
       local('Roboto Regular'),
       local('Roboto-Regular'),
       url("data:application/x-font-woff;charset=utf-8;base64,[…]") format("woff");
}

body {
  font-family: 'Roboto', sans-serif;
}

pre, code {
  font-family: 'Inconsolata', monospace;
}

Example document:

<head>
<style type="text/css">
/* Defined, but not referenced */

@font-face {
  font-family: 'Inconsolata';
  src: local('Inconsolata'),
       url(/Inconsolata.woff2) format('woff2'),
       url(/Inconsolata.woff) format('woff');
}   

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: local('Roboto'),
       local('Roboto Regular'),
       local('Roboto-Regular'),
       url(/Roboto-Regular.woff2) format('woff2'),
       url(/Roboto-Regular.woff) format('woff');
}   
</style>
<script type="text/javascript">
if (!!document['fonts']) {
        /* font loading API supported */
        var r = "body { font-family: 'Roboto', sans-serif; }";
        var i = "pre, code { font-family: 'Inconsolata', monospace; }";
        var l = function(m) {
                if (!document.body) {
                        /* cached, before DOM is built */
                        document.write("<style>"+m+"</style>");
                } else {
                        /* uncached, after DOM is built */
                        document.body.innerHTML+="<style>"+m+"</style>";
                }
        };
        new FontFace('Roboto',
                     "local('Roboto'), " +
                     "local('Roboto Regular'), " +
                     "local('Roboto-Regular'), " +
                     "url(/Roboto-Regular.woff2) format('woff2'), " +
                     "url(/Roboto-Regular.woff) format('woff')")
                .load().then(function() { l(r); });
        new FontFace('Inconsolata',
                     "local('Inconsolata'), " +
                     "url(/Inconsolata.woff2) format('woff2'), " +
                     "url(/Inconsolata.woff) format('woff')")
                .load().then(function() { l(i); });
} else {
        var l = document.createElement('link');
        l.rel = 'preload';
        l.href = '/fonts-woff.css';
        l.as = 'style';
        l.onload = function() { this.rel = 'stylesheet'; };
        document.head.appendChild(l);
}
</script>
<noscript>
  <style type="text/css">
    body { font-family: 'Roboto', sans-serif; }
    pre, code { font-family: 'Inconsolata', monospace; }
  </style>
</noscript>
</head>
<body>

[…content…]

<script type="text/javascript">
/* inlined loadCSS.js and cssrelpreload.js from
   https://github.com/filamentgroup/loadCSS/tree/master/src */
(function(a){"use strict";var b=function(b,c,d){var e=a.document;var f=e.createElement("link");var g;if(c)g=c;else{var h=(e.body||e.getElementsByTagName("head")[0]).childNodes;g=h[h.length-1];}var i=e.styleSheets;f.rel="stylesheet";f.href=b;f.media="only x";function j(a){if(e.body)return a();setTimeout(function(){j(a);});}j(function(){g.parentNode.insertBefore(f,(c?g:g.nextSibling));});var k=function(a){var b=f.href;var c=i.length;while(c--)if(i[c].href===b)return a();setTimeout(function(){k(a);});};function l(){if(f.addEventListener)f.removeEventListener("load",l);f.media=d||"all";}if(f.addEventListener)f.addEventListener("load",l);f.onloadcssdefined=k;k(l);return f;};if(typeof exports!=="undefined")exports.loadCSS=b;else a.loadCSS=b;}(typeof global!=="undefined"?global:this));
(function(a){if(!a.loadCSS)return;var b=loadCSS.relpreload={};b.support=function(){try{return a.document.createElement("link").relList.supports("preload");}catch(b){return false;}};b.poly=function(){var b=a.document.getElementsByTagName("link");for(var c=0;c<b.length;c++){var d=b[c];if(d.rel==="preload"&&d.getAttribute("as")==="style"){a.loadCSS(d.href,d);d.rel=null;}}};if(!b.support()){b.poly();var c=a.setInterval(b.poly,300);if(a.addEventListener)a.addEventListener("load",function(){a.clearInterval(c);});if(a.attachEvent)a.attachEvent("onload",function(){a.clearInterval(c);});}}(this));
</script>
</body>

by Michael Stapelberg at 28. January 2017 15:57:00

31. December 2016

RaumZeitLabor

2k16 interhackerspaces xmas swap

For a few years now, it has been a fine tradition that the best of the hackerspaces give each other presents for Christmas. In Secret Santa fashion (sometimes more like a junk-gift Secret Santa), packages are sent out at the end of the year. The whole exchange is organized via the wiki of the hackerspace Frack from the Netherlands.

This year we entered with a complete in-house development: as the “Leading Hackerspace in Box-Making-Technology” we shipped locked wooden boxes that only open electronically after several puzzles have been solved, releasing the coveted Mannheim chocolate water tower.

We commissioned befriended external artists to give the boxes a suitable design, and we think the result is definitely worth seeing.

The nerds of the Hacklabor received one of these boxes from us, and you can watch the whole project in action here on YouTube. By the way, we are still waiting for video messages from the other hackerspaces blessed with boxes…

xmas 2k16

by s1lvester at 31. December 2016 00:00:00

12. December 2016

RaumZeitLabor

Impressions from the excursion to the RNV

On December 9th we visited Rhein-Neckar-Verkehr GmbH and looked at everything that was not locked, from the tram washing facility to the control centre. Here are a few impressions from the guided tour. More hackertours are being planned.

RNV Impressionen

by tabascoeye at 12. December 2016 00:00:00

21. November 2016

sECuREs Webseite

Gigabit NAS (running CoreOS)

tl;dr: I upgraded from a qnap TS-119P to a custom HTPC-like network storage solution. This article outlines my original reasoning for the qnap TS-119P, what I learnt, and what exactly I replaced the qnap with.

A little over two years ago, I gave a (German) presentation about my network storage setup (see video or slides). Given that video isn’t a great consumption format when you’re pressed on time, and given that a number of my readers might not speak German, I’ll recap the most important points:

  • I reduced the scope of the setup to storing daily backups and providing media files via CIFS.
    I have come to prefer numerous smaller setups over one gigantic setup which offers everything (think a Turris Omnia acting as a router, mail server, network storage, etc.). Smaller setups can be debugged, upgraded or rebuilt in less time. Time-boxing activities has become very important to me as I work full time: if I can’t imagine finishing the activity within 1 or 2 hours, I usually don’t get started on it at all unless I’m on vacation.
  • Requirements: FOSS, relatively cheap, relatively low power usage, relatively high redundancy level.
    I’m looking not to spend a large amount of money on hardware. Whenever I do spend a lot, I feel pressured to get the most out of my purchase and use the hardware for many years. However, I find it more satisfying to be able to upgrade more frequently — just like the update this article is describing :).
    With regards to redundancy, I’m not content with being able to rebuild the system within a couple of days after a failure occurs. Instead, I want to be able to trivially switch to a replacement system within minutes. This requirement results in the decision to run 2 separate qnap NAS appliances with 1 hard disk each (instead of e.g. a RAID-1 setup).
    The decision to go with qnap as a vendor came from the fact that their devices are pretty well supported in the Debian ecosystem: there is a network installer for it, the custom hardware is supported by the qcontrol tool and one can build a serial console connector.
  • The remainder of the points is largely related to software, and hence not relevant for this article, as I’m keeping the software largely the same (aside from the surrounding operating system).

What did not work well

Even a well-supported embedded device like the qnap TS-119P requires too much effort to be set up:

  1. Setting up the network installer is cumbersome.
  2. I contributed patches to qcontrol for setting the wake on LAN configuration on the qnap TS-119P’s controller board and for systemd support in qcontrol.
  3. I contributed my first ever patch to the Linux kernel for wake on LAN support for the Marvell MV643xx series chips.
  4. I ended up lobbying Debian to enable the CONFIG_MARVELL_PHY kernel option, while personally running a custom kernel.

On the hardware side, to get any insight into what the device is doing, your only input/output option is a serial console. To get easy access to that serial console, you need to solder an adapter cable for the somewhat non-standard header which they use.

All of this contributed to my impression that upgrading the device would be equally involved. Logically speaking, I know that this is unlikely since my patches are upstream in the Linux kernel and in Debian. Nevertheless, I couldn’t help but feel like it would be hard. As a result, I have not upgraded my device ever since I got it working, i.e. more than two years ago.

The take-away is that I now try hard to:

  1. use standard hardware which fits well into my landscape
  2. use a software setup which has as few non-standard modifications as possible and which automatically updates itself.

What I would like to improve

One continuous pain point was how slow the qnap TS-119P was with regards to writing data to disk. The slowness was largely caused by full-disk encryption. The device’s hardware accelerator turned out to be useless with cryptsetup-luks’s comparatively small hard-coded 4K block size, resulting in about 6 to 10 MB/s of throughput.

This resulted in me downloading files onto the SSD in my workstation and then transferring them to the network storage. Doing these downloads in a faster environment circumvents my somewhat irrational fears about the files becoming unavailable while you are downloading them, and allows me to take pleasure in how fast I’m able to download things :).

The take-away is that any new solution should be as quick as my workstation and network, i.e. it should be able to write files to disk with gigabit speed.

What I can get rid of

While dramaqueen and autowake worked well in principle, they turned out to no longer be very useful in my environment: I switched from a dedicated OpenELEC-based media center box to using emby and a Chromecast. emby is also nice to access remotely, e.g. when watching series at a friend’s place, or watching movies while on vacation or business trips somewhere. Hence, my storage solution needs to be running 24/7 — no more automated shutdowns and wake-up procedures.

What worked well

Reducing the scope drastically in terms of software setup complexity paid off. If it weren’t for that, I probably would not have been able to complete this upgrade within a few mornings/evenings and would likely have pushed this project out for a long time.

The new hardware

I researched the following components back in March, but then put the project on hold due to time constraints and to allow myself some more time to think about it. I finally ordered the components in August, and they still ranked best with regards to cost / performance ratio and fulfilling my requirements.

Price        Type          Article
43.49 CHF    Case          SILVERSTONE Sugo SST-SG05BB-LITE
60.40 CHF    Mainboard     ASROCK AM1H-ITX
38.99 CHF    CPU           AMD Athlon 5350 (supports AES-NI)
20.99 CHF    Cooler        Alpine M1-Passive
32.80 CHF    RAM           KINGSTON ValueRAM, 8.0GB (KVR16N11/8)
28.70 CHF    PSU           Toshiba PA3822U-1ACA, PA3822E-1AC3, 19V 2,37A (To19V_2.37A_5.5x2.5)
0 CHF        System disk   (OCZ-AGILITY3 60G) You’ll need to do your own research. Currently, my system uses 5GB of space, so choose the smallest SSD you can find.
225.37 CHF   total sum

For the qnap TS-119P, I paid 226 CHF, so my new solution is a tad more expensive. However, I had the OCZ-AGILITY3 lying around from a warranty exchange, so bottomline, I paid less than what I had paid for the previous solution.

I haven’t measured this myself, but according to the internet, the setups have the following power consumption (without disks):

  • The qnap TS-119P uses ≈7W, i.e. ≈60 CHF/year for electricity.
  • The AM1H-ITX / Athlon 5350 setup uses ≈20W, i.e. ≈77 CHF/year for electricity.

In terms of fitting well into my hardware landscape, this system does a much better job than the qnap. Instead of having to solder a custom serial port adapter, I can simply connect a USB keyboard and an HDMI or DisplayPort monitor and I’m done.

Further, any linux distribution can easily be installed from a bootable USB drive, without the need for any custom tools or ports.

Full-disk encryption performance

# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       338687 iterations per second
PBKDF2-sha256     228348 iterations per second
PBKDF2-sha512     138847 iterations per second
PBKDF2-ripemd160  246375 iterations per second
PBKDF2-whirlpool   84891 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   468.8 MiB/s  1040.9 MiB/s
     aes-cbc   256b   366.4 MiB/s   885.8 MiB/s
     aes-xts   256b   850.8 MiB/s   843.9 MiB/s
     aes-xts   512b   725.0 MiB/s   740.6 MiB/s

Network performance

As the old qnap TS-119P would only sustain gigabit performance using IPv4 (with TCP checksum offloading), I was naturally relieved to see that the new solution can send packets at gigabit line rate using both protocols, IPv4 and IPv6. I ran the following tests inside a docker container (docker run --net=host -t -i debian:latest):

# nc 10.0.0.76 3000 | dd of=/dev/null bs=5M
0+55391 records in
0+55391 records out
416109464 bytes (416 MB) copied, 3.55637 s, 117 MB/s
# nc 2001:db8::225:8aff:fe5d:53a9 3000 | dd of=/dev/null bs=5M
0+275127 records in
0+275127 records out
629802884 bytes (630 MB) copied, 5.45907 s, 115 MB/s

The CPU was >90% idle using netcat-traditional.
The CPU was >70% idle using netcat-openbsd.

End-to-end throughput

Reading/writing to a disk which uses cryptsetup-luks full-disk encryption with the aes-cbc-essiv:sha256 cipher, these are the resulting speeds:

Reading a file from a CIFS mount works at gigabit throughput, without any tuning:

311+1 records in
311+1 records out
1632440260 bytes (1,6 GB) copied, 13,9396 s, 117 MB/s

Writing works at almost gigabit throughput:

1160+1 records in
1160+1 records out
6082701588 bytes (6,1 GB) copied, 58,0304 s, 105 MB/s

During rsync+ssh backups, the CPU is never 100% maxed out, and data is sent to the NAS at 65 MB/s.

The new software setup

Given that I wanted to use a software setup which has as few non-standard modifications as possible and automatically updates itself, I was curious to see if I could carry this to the extreme by using CoreOS.

If you’re unfamiliar with it, CoreOS is a Linux distribution which is intended to be used in clusters on individual nodes. It updates itself automatically (using omaha, Google’s updater behind ChromeOS) and comes as a largely read-only system without a package manager. You deploy software using Docker and configure the setup using cloud-config.

I have been successfully using CoreOS for a few years in virtual machine setups such as the one for the RobustIRC network.

The cloud-config file I came up with can be found in Appendix A. You can pass it to the CoreOS installer’s -c flag. Personally, I installed CoreOS by booting from a grml live linux USB key, then running the CoreOS installer.

In order to update the cloud-config file after installing CoreOS, you can use the following commands:

midna $ scp cloud-config.storage.yaml core@10.252:/tmp/
storage $ sudo cp /tmp/cloud-config.storage.yaml /var/lib/coreos-install/user_data
storage $ sudo coreos-cloudinit --from-file=/var/lib/coreos-install/user_data

Dockerfiles: rrsync and samba

Since neither rsync nor samba directly provide Docker containers, I had to whip up the following small Dockerfiles which install the latest versions from Debian jessie.

Of course, this means that I need to rebuild these two containers regularly, but I also can easily roll them back in case an update broke, and the rest of the operating system updates independently of these mission-critical pieces.

Eventually, I’m looking to enable auto-build for these Docker containers so that the Docker hub rebuilds the images when necessary, and the updates are picked up either manually when time-critical, or automatically by virtue of CoreOS rebooting to update itself.

FROM debian:jessie
RUN apt-get update \
  && apt-get install -y rsync \
  && gunzip -c /usr/share/doc/rsync/scripts/rrsync.gz > /usr/bin/rrsync \
  && chmod +x /usr/bin/rrsync
ENTRYPOINT ["/usr/bin/rrsync"]

FROM debian:jessie
RUN apt-get update && apt-get install -y samba
ADD smb.conf /etc/samba/smb.conf
EXPOSE 137 138 139 445
CMD ["/usr/sbin/smbd", "-FS"]

Appendix A: cloud-config

#cloud-config

hostname: "storage"

ssh_authorized_keys:
  - ssh-rsa AAAAB… michael@midna

write_files:
  - path: /etc/ssl/certs/r.zekjur.net.crt
    content: |
      -----BEGIN CERTIFICATE-----
      MIIDYjCCAko…
      -----END CERTIFICATE-----
  - path: /var/lib/ip6tables/rules-save
    permissions: 0644
    owner: root:root
    content: |
      # Generated by ip6tables-save v1.4.14 on Fri Aug 26 19:57:51 2016
      *filter
      :INPUT DROP [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -p ipv6-icmp -m comment --comment "IPv6 needs ICMPv6 to work" -j ACCEPT
      -A INPUT -m state --state RELATED,ESTABLISHED -m comment --comment "Allow packets for outgoing connections" -j ACCEPT
      -A INPUT -s fe80::/10 -d fe80::/10 -m comment --comment "Allow link-local traffic" -j ACCEPT
      -A INPUT -s 2001:db8::/32 -m comment --comment "local traffic" -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
      COMMIT
      # Completed on Fri Aug 26 19:57:51 2016
  - path: /root/.ssh/authorized_keys
    permissions: 0600
    owner: root:root
    content: |
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/midna:/srv/backup/midna stapelberg/rsync /srv/backup/midna" ssh-rsa AAAAB… root@midna
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/scan2drive:/srv/backup/scan2drive stapelberg/rsync /srv/backup/scan2drive" ssh-rsa AAAAB… root@scan2drive
      command="/bin/docker run -i -e SSH_ORIGINAL_COMMAND -v /srv/backup/alp.zekjur.net:/srv/backup/alp.zekjur.net stapelberg/rsync /srv/backup/alp.zekjur.net" ssh-rsa AAAAB… root@alp

coreos:
  update:
    reboot-strategy: "reboot"
  locksmith:
    window_start: 01:00 # UTC, i.e. 02:00 CET or 03:00 CEST
    window_length: 2h  # i.e. until 03:00 CET or 04:00 CEST
  units:
    - name: ip6tables-restore.service
      enable: true

    - name: 00-enp2s0.network
      runtime: true
      content: |
        [Match]
        Name=enp2s0

        [Network]
        DNS=10.0.0.1
        Address=10.0.0.252/24
        Gateway=10.0.0.1
        IPv6Token=0:0:0:0:10::252

    - name: systemd-networkd-wait-online.service
      command: start
      drop-ins:
        - name: "10-interface.conf"
          content: |
            [Service]
            ExecStart=
            ExecStart=/usr/lib/systemd/systemd-networkd-wait-online \
	      --interface=enp2s0

    - name: unlock.service
      command: start
      content: |
        [Unit]
        Description=unlock hard drive
        Wants=network.target
        After=systemd-networkd-wait-online.service
        Before=samba.service
        
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # Wait until the host is actually reachable.
        ExecStart=/bin/sh -c "c=0; while [ $c -lt 5 ]; do \
	    /bin/ping6 -n -c 1 r.zekjur.net && break; \
	    c=$((c+1)); \
	    sleep 1; \
	  done"
        ExecStart=/bin/sh -c "(echo -n my_local_secret && \
	  wget \
	    --retry-connrefused \
	    --ca-directory=/dev/null \
	    --ca-certificate=/etc/ssl/certs/r.zekjur.net.crt \
	    -qO- https://r.zekjur.net/sdb2_crypt) \
	  | /sbin/cryptsetup --key-file=- luksOpen /dev/sdb2 sdb2_crypt"
        ExecStart=/bin/mount /dev/mapper/sdb2_crypt /srv

    - name: samba.service
      command: start
      content: |
        [Unit]
        Description=samba server
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull stapelberg/samba:latest

        # Set up samba users (cannot be done in the (public) Dockerfile
        # because users/passwords are sensitive information).
        ExecStartPre=-/usr/bin/docker kill smb
        ExecStartPre=-/usr/bin/docker rm smb
        ExecStartPre=-/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep stapelberg/samba \
	  adduser --quiet --disabled-password --gecos "" --uid 29901 michael
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared
        ExecStartPre=/usr/bin/docker rm smb-prep
        ExecStartPre=/usr/bin/docker run --name smb-prep smb-prepared \
	  /bin/sh -c "echo my_password | tee - | smbpasswd -a -s michael"
        ExecStartPre=/usr/bin/docker commit smb-prep smb-prepared

        ExecStart=/usr/bin/docker run \
          -p 137:137 \
          -p 138:138 \
          -p 139:139 \
          -p 445:445 \
          --tmpfs=/run \
          -v /srv/data:/srv/data \
          --name smb \
          -t \
          smb-prepared \
            /usr/sbin/smbd -FS

    - name: emby.service
      command: start
      content: |
        [Unit]
        Description=emby
        After=docker.service srv.mount
        Requires=docker.service srv.mount

        [Service]
        Restart=always
        StartLimitInterval=0

        # Always pull the latest version (bleeding edge).
        ExecStartPre=-/usr/bin/docker pull emby/embyserver

        ExecStart=/usr/bin/docker run \
          --rm \
          --net=host \
          -v /srv/data/movies:/srv/data/movies:ro \
          -v /srv/data/series:/srv/data/series:ro \
          -v /srv/emby:/config \
          emby/embyserver

by Michael Stapelberg at 21. November 2016 16:12:00

16. November 2016

sur5r/blog

Another instance of AC_DEFINE being undefined

While trying to build a backport of i3 4.13 for Debian wheezy (currently oldstable), I stumbled across the following problem:


   dh_autoreconf -O--parallel -O--builddirectory=build
configure.ac:132: error: possibly undefined macro: AC_DEFINE
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
dh_autoreconf: autoreconf -f -i returned exit code 1


Digging around the net I found nothing that helped. So I tried building that package manually. After some trial and error, I noticed that autoreconf as of wheezy seems to ignore AC_CONFIG_MACRO_DIR. Calling autoreconf with -i -f -I m4 solved this.

I finally added this to debian/rules:

override_dh_autoreconf:
        dh_autoreconf autoreconf -- -f -i -I m4

by sur5r (nospam@example.com) at 16. November 2016 00:49:23

29. September 2016

RaumZeitLabor

††† To all Cheesus freaks: Holy feast with Cheesus Christ †††

KaesusLiebtDich



It is time to commemorate our friend Cheesus Christ!

We want to break bread Halloumi-gether at a last supper, for eating with friends is good for the soul. So Comté along to the RaumZeitLabor in Mannheim-Emmental on 22.10.2016. When the Babybel rings at 18:30, we will open the holy Feta with a Havarti Father. Do not hesitate to accept this invitation: so that we know how much wine to make from water, give us a divine sign [†] by 19.10.2016. We would be glad if everyone let 10 Euro drop into the collection bag. Remember: Cheesus Christ also died for your Stilton.

May Brie-ce be with you. Amen.

[†] No mail, no cheese!

by flederrattie at 29. September 2016 00:00:00

31. August 2016

Mero’s Blog

I've been diagnosed with ADHD

tl;dr: I've been diagnosed with ADHD. I ramble incoherently for a while and I might do some less rambling posts about it in the future.

As the title says, I've recently been diagnosed with ADHD and I thought I'd try to be as open about it as possible and share my personal experiences with mental illness. That being said, I am also adding the disclaimer that I have no training or special knowledge about it and that the fact that I have been diagnosed with ADHD does not mean I am an authoritative source on its effects, that this diagnosis is going to stick or that my experiences in any way generalize to other people who got the same diagnosis.

This will hopefully turn into a bit of a series of blog posts and I'd like to start it off with a general description of what led me to look for a diagnosis and treatment in the first place. Some of the things I am only touching on here I might write more about in the future (see below for a non-exhaustive list). Or not. I have not yet decided :)


It is no secret (it's actually kind of a running gag) that I am a serious procrastinator. I always had trouble starting on something and staying with it; my graveyard of unfinished projects is huge. For most of my life, however, this hasn't been a huge problem for me. I was reasonably successful in compensating for it with a decent amount of intelligence (arrogant as that sounds). I never needed to do homework and never needed to study for tests in school, and even at university I spent only very little time on both. The homework we got was short-term enough that procrastination was not a real danger, I finished it quickly, and whenever there was a longer-term project to finish (such as a seminar talk or studying for exams) I could cram for a night and get enough of an understanding of things to do a decent job.

However, that strategy did not work for either my bachelor or my master thesis, which predictably led to both turning out a lot worse than I would've wished for (I am not going to go into too much detail here). Self-organized long-term work seemed next to impossible. This problem got much worse when I started working full-time. Now almost all my work was self-organized and long-term. Goals are set on a quarterly basis, the decision when and how and how much to work is completely up to you. Other employers might frown at their employees slacking off at work; where I work, it's almost expected. I was good at being on-call, which is mainly reactive, short-term problem solving. But I was (and am) completely dissatisfied with my project work. I felt that I did not get nearly as much done as I should or as I would want. My projects in my first quarter had very clear deadlines and I finished on time (I still procrastinated, but at least at some point I sat down until I got it done. It still meant staying at work until 2am the day before the deadline) but after that it went downhill fast, with projects that needed to be done ASAP, but not with a deadline. So I started slipping. I didn't finish my projects (in fact, the ones that I didn't drop I am still not done with), I spent weeks doing effectively nothing (I am not exaggerating here. I spent whole weeks not writing a single line of code, closing a single bug or running a single command, doing nothing but browsing reddit and twitter and wasting my time in similar ways. Yes, you can waste a week doing nothing, while sitting at your desk), not being able to get myself to start working on anything and hating myself for it.

And while I am mostly talking about work, this affected my personal life too. Mail remains unopened, important errands don't get done, I am having trouble keeping in contact with friends, because I can always write or visit them some other time…

I tried (and had tried over the past years) several systems to organize myself better, to motivate myself and to remove distractions. I was initially determined to try to solve my problems on my own and believed that I did not really need professional help. However, at some point, I realized that I wouldn't be able to fix this just by willing myself to it. I realized it in the final months of my master thesis, but convinced myself that I don't have time to fix it properly, after all, I have to write my thesis. I then kind of forgot about it (or rather: I procrastinated it) in the beginning of starting work, because things were going reasonably well. But it came back to me around the start of this year. After not finishing any of my projects in the first quarter. And after telling my coworkers and friends of my problems and them telling me that it's just impostor syndrome and a distorted view of myself (I'll go into why they were wrong some more later, possibly).

I couldn't help myself and I couldn't get effective help from my coworkers. So, in April, I finally decided to see a psychologist. Previously, the fear of the potential cost (or the stress of dealing with how that works with my insurance), the perceived complexity of finding one who both accepts patients that are only publicly insured and specializes in my particular kinds of issues, and the perceived lack of time prevented me from doing so. Apart from a general doubt about its effectiveness and fear of the implications for my self-perception and world-view, of course.

Luckily one of the employee benefits at my company is free and uncomplicated access to a Mental Health (or "Emotional well-being", what a fun euphemism) provider. It only took a single E-Mail and the meetings happen around 10 meters away from my desk. So I started seeing a psychologist on a regular basis (on average probably every one and a half weeks or so) and talking about my issues. I explained and described my problems and it went about as well as I feared; they tried to convince me that the real issue isn't my performance, but my perception of it (and I concede that I still have trouble coming up with hard, empirical evidence to present to people. Though it's performance review season right now. As I haven't done anything of substance in the past 6 months, maybe I will finally get that evidence…) and they tried to get me to adopt more systems to organize myself and remove distractions. All the while, I got worse and worse. My inability to get even the most basic things done or to concentrate even for an hour, even for five minutes, on anything of substance, combined with the inherent social isolation of moving to a new city and country, led me into deep depressive episodes.

Finally, when my psychologist in a session tried to get me to write down what was essentially an Unschedule (a system I knew about from reading "The Now Habit" myself when working on my bachelor thesis and that I even had moderate success with; for about two weeks), I broke down. I told them that I do not consider this a valuable use of these sessions, that I had tried systems before, that I had tried this particular system before and that I can find this kind of lifestyle advice on my own, in my free time. That the reason I was coming to these sessions was to get systematic, medical, professional help of the kind that I can't get from books. So we agreed, at that point, to pursue a diagnosis, as a precondition for treatment.

Which, basically, is where we are at now. The diagnostic process consisted of several sessions of questions about my symptoms, my childhood and my life in general, of filling out a couple of diagnostic surveys and having my siblings fill them out too (in the hope that they can fill in some of the blanks in my own memory about my childhood) and of several sessions of answering more questions from more surveys. And two weeks ago, I officially got the diagnosis ADHD. And the plan to attack it by a combination of therapy and medication (the latter, in particular, is really hard to get from books, for some reason :) ).

I just finished my first day on Methylphenidate (the active substance in Ritalin), specifically Concerta. And though this is definitely much too early to actually make definitive judgments on its effects and effectiveness, at least for this one day I was feeling really great and actively happy. Which, coincidentally, helped me to finally start on this post, to talk about mental health issues; a topic I've been wanting to talk about ever since I started this blog (again), but so far didn't really feel I could.


As I said, this is hopefully the first post in a small ongoing series. I am aware that it is long, rambling and probably contentious. It definitely won't get all my points across and will change the perception some people have of me (I can hear you thinking how all of this doesn't really speak "mental illness", how it seems implausible that someone with my CV would actually, objectively, get nothing done and how I am a drama queen and shouldn't try to solve my problems with dangerous medication). It's an intentionally broad overview of my process and its main purpose is to "out" myself publicly and create starting points for multiple, more specific, concise, interesting and convincing posts in the future. Things I might, or might not talk about are

  • My specific symptoms and how this has influenced and still influences my life (and how, yes, this is actually an illness, not just a continuous label). In particular, there are things I wasn't associating with ADHD, which turn out to be relatively tightly linked.
  • How my medication is specifically affecting me and what it does to those symptoms. I can not overstate how fascinated I am with today's experience. I was wearing a visibly puzzled expression all day because I couldn't figure out what was happening. And then I couldn't stop smiling. :)
  • Possibly things about my therapy? I really don't know what to expect about that, though. Therapy is kind of the long play, so it's much harder to evaluate and talk about its effectiveness.
  • Why I consider us currently to be in the Middle Ages of mental health and why I think that in a hundred years or so people will laugh at how we currently deal with it. And possibly be horrified.
  • My over ten years (I'm still young, mkey‽) of thinking about my own mental health and mental health in general and my thoughts of how mental illness interacts with identity and self-definition.
  • How much I loathe the term "impostor syndrome" and why I am (still) convinced that I don't get enough done, even though I can't produce empirical evidence for that and people try to convince me otherwise. And what it does to you, to need to take the "I suck" side of an argument and still not have people believe you.

Let me know, what you think :)

31. August 2016 02:22:38

15. August 2016

sur5r/blog

Calculating NXP LPC43xx checksums using srecord 1.58

As everybody on the internet seems to rely on either Keil on Windows or precompiled binaries from unknown sources to generate their checksums, I investigated a bit...

Assuming you ran something like this to generate your binary image:

arm-none-eabi-objcopy -v -O binary firmware.elf firmware.bin

The resulting firmware.bin is still lacking the checksum which LPC chips use to check the validity of the user code.

The following command will create a file firmware_out.bin with the checksum applied:

srec_cat firmware.bin -binary -crop 0x0 0x1c -le-checksum-negative 0x1C 4 4 firmware.bin -binary -crop 0x20 -o firmware_out.bin -binary

by sur5r (nospam@example.com) at 15. August 2016 15:05:54

19. July 2016

RaumZeitLabor

MRMCD 2016: Vorverkauf läuft

Vorr. T-Shirt-Motiv

English version below

Seit einigen Wochen läuft der Vorverkauf zu den MRMCD 2016. Tickets und T-Shirts können noch bis Ende Juli unter presale.mrmcd.net bestellt werden.

Die Teilnahme am Vorverkauf erleichtert unsere Arbeit sehr, da er uns eine bessere Planung ermöglicht und die finanziellen Mittel verschafft, die wir vor der Konferenz schon brauchen. Es wird eine Abendkasse geben, an der allerdings keine T-Shirts und nur begrenzt Goodies erhältlich sind.

Wir sind unter klinikleitung@mrmcd.net für alle Fragen erreichbar.

Die MRMCD (MetaRheinMainChaosDays) sind eine seit mehr als zehn Jahren jährlich stattfindende IT-Konferenz des CCC mit einer leichten thematischen Ausrichtung zur IT-Sicherheit. Seit 2012 findet die Veranstaltung in Zusammenarbeit mit dem Fachbereich Informatik an der Hochschule Darmstadt (h_da) statt. Neben einem hochwertigen Vortragsprogramm bieten die MRMCD die Möglichkeit zum entspannten Austausch mit zahlreichen IT-Experten im Rahmen einer zwanglosen Atmosphäre. Das diesjährige Motto “diagnose: kritisch” setzt einen Themenschwerpunkt auf IT und Innovation rund um Medizin und Gesundheit.

Wir freuen uns bis zum 25.07. auch noch über zahlreiche Vortragseinreichungen im frab. Weitere Informationen gibt es auf unserer Website mrmcd.net.

The presale of this year’s MRMCD tickets started a few weeks ago; you can buy your tickets and t-shirts at presale.mrmcd.net. The presale runs until July 25th.

By buying your tickets in advance, you make organizing this conference a lot easier for us. It enables us to plan the event properly and gives us the money we need up front to buy all the things a conference needs. There will be a ticket sale on-site, but no t-shirts and no guaranteed goodies.

If you have any questions, please contact us at klinikleitung@mrmcd.net.

The MRMCD (MetaRheinMainChaosDays) are an annual IT conference of the CCC with a slight focus on IT security. The MRMCD have been taking place in the Rhine-Main area for over 10 years. Since 2012 we have been cooperating with the faculty of Computer Science of the University of Applied Sciences Darmstadt (h_da). Apart from the conference program, the MRMCD provide the opportunity to exchange ideas with IT experts in a relaxed atmosphere. This year’s motto “diagnosis: critical” sets a special focus on IT and innovation in the medical and health field.

We are still accepting talk submissions until the 25th of July and we look forward to your submission at frab.cccv.de. You can find further information on our website mrmcd.net.

by Alexander at 19. July 2016 00:00:00

28. June 2016

RaumZeitLabor

Impressions from the elevator museum in Seckenheim

Last Friday, a small delegation of RaumZeitLabor members set out to visit the Seckenheim water tower. Inside, it houses a unique museum all about elevators that is well worth seeing.

During the roughly two-hour tour we learned all kinds of exciting and curious facts about elevators as well as exclusive insider information about the Lochbühler company, all illustrated with matching exhibits. One of the highlights was the fully functional paternoster, though to ride it we would have had to apply as paternoster dummies beforehand.

At the end of the tour we were allowed to enjoy the view over Mannheim from the Lochbühler bar, which is located directly beneath the water tower's dome.

All in all a successful excursion: the seal of approval “branch office of the RaumZeitLabor” was of course awarded.

Aufzugsmuseum

by tabascoeye at 28. June 2016 00:00:00

20. June 2016

Moredreads Blog

16. June 2016

sECuREs Webseite

Conditionally tunneling SSH connections

Whereas most of the networks I regularly use (home, work, hackerspace, events, …) provide native IPv6 connectivity, sometimes I’m in a legacy-only network, e.g. when tethering via my phone on some mobile providers.

By far the most common IPv6-only service I use these days is SSH to my computer(s) at home. On philosophical grounds, I refuse to set up a dynamic DNS record and port-forwardings, so the alternative I use is either Miredo or tunneling through a dual-stacked machine. For the latter, I used to use the following SSH config:

Host home
        Hostname home.zekjur.net
        ProxyCommand ssh -4 dualstack -W %h:%p

The issue with that setup is that it’s inefficient when I’m in a network which does support IPv6, and it requires me to authenticate to both machines. These are not huge issues of course, but they bother me enough that I’ve gotten into the habit of commenting out/in the ProxyCommand directive regularly.

I’ve discovered that SSH can be told to use a ProxyCommand only when you don’t have a route to the public IPv6 internet, though:

Match host home exec "ip -6 route get 2001:7fd::1 | grep -q unreachable"
        ProxyCommand ssh -4 dualstack -W %h:%p

Host home
        Hostname home.zekjur.net

The IPv6 address used is from k.root-servers.net, but no packets are being sent — we merely ask the kernel for a route. When you don’t have an IPv6 address or default route, the ip command will print unreachable, which enables the ProxyCommand.

To debug or verify that the setup works as expected, use ssh -vvv and look for lines prefixed with “debug3”.

by Michael Stapelberg at 16. June 2016 17:20:00

02. June 2016

RaumZeitLabor

Excursion to the Elevator Museum in Seckenheim

Aufzugmuseum

Finally it’s that time again: excursion time! Or rather: elevator time! Together we will visit the elevator museum in Seckenheim.

The Seckenheim water tower was converted into a museum by the Lochbühler company; it exhibits working elevators, components and their technology from the late 19th century onwards. Besides the »normal« elevators, the museum also houses a paternoster, which we may even get to ride if we behave.

The whole thing takes place on Friday, 24 June 2016. We will meet at 18:15 in front of the entrance at Kloppenheimer Straße 94 in Mannheim-Seckenheim. The tour starts at 18:30 and is expected to last about an hour and a half.

Attendance is limited to a maximum of 20 people, so prior registration is required. To register, simply send me an email with the subject “Exkursion Aufzugsmuseum” and the number of seats you would like. Registrations must reach me by 22 June 2016, 13:37.

Further information can be found, for example, on the Rhein-Neckar-Industriekultur website.

by flederrattie at 02. June 2016 00:00:00

26. May 2016

mfhs Blog

Introduction to awk programming block course

Last year I taught a course about bash scripting, during which I briefly touched on the scripting language awk. Some of the attendees wanted to hear more about it, so my graduate school asked me to prepare a short block course on awk programming for this year. The course will run from the 15th to the 17th of August 2016 and registration is now open. You can find an outline and further information on the "Introduction to awk" course page. If you cannot make the course in person, do not worry: all course material will be published both on the course website and on GitHub afterwards.
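For readers who have never touched awk, the following one-liner gives a flavour of the language (my own illustrative example, not taken from the course material): awk splits each input line into fields and runs pattern/action blocks over them.

# sum the file sizes reported in the fifth column of `ls -l`
# (NR > 1 skips the leading "total" line)
ls -l | awk 'NR > 1 { bytes += $5 } END { print bytes, "bytes in total" }'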

by mfh at 26. May 2016 00:00:19

10. May 2016

sECuREs Webseite

Supermicro X11SSZ-QF workstation mainboard

Context

For the last 3 years I’ve used the hardware described in my 2012 article. In order to drive a hi-dpi display, I needed to install an nVidia graphics card, since only the nVidia hardware/software supported multi-tile displays requiring MST (Multiple Stream Transport) such as the Dell UP2414Q. While I’ve switched to a Viewsonic VX2475Smhl-4K in the meantime, I still needed a recent-enough DisplayPort output that could deliver 3840x2160@60Hz. This is not the case for the Intel Core i7-2600K’s integrated GPU, so I needed to stick with the nVidia card.

I then stumbled over a video file which, when played back using the nouveau driver’s VDPAU functionality, would lock up my graphics card entirely, so that only a cold reboot helped. This got me annoyed enough to upgrade my hardware.

Why the Supermicro X11SSZ-QF?

Intel, my standard pick for mainboards with good Linux support, unfortunately stopped producing desktop mainboards. I looked around a bit for Skylake mainboards and realized that the Intel Q170 Express chipset actually supports 2 DisplayPort outputs that each support 3840x2160@60Hz, enabling a multi-monitor hi-dpi display setup. While I don’t currently have multiple monitors and don’t intend to get another monitor in the near future, I thought it’d be nice to have that as a possibility.

It turns out that there are only two mainboards out there which use the Q170 Express chipset and actually expose two DisplayPort outputs: the Fujitsu D3402-B and the Supermicro X11SSZ-QF. The Fujitsu one doesn’t have an integrated S/PDIF output, which I need to play audio on my Denon AVR-X1100W without a constant noise level. Also, I wasn’t able to find software downloads or even a manual for the board on the Fujitsu website. For Supermicro, you can find the manual and software very easily on their website, and because I bought Supermicro hardware in the past and was rather happy with it, I decided to go with the Supermicro option.

I’ve been using the board for half a year now, without any stability issues.

Mechanics and accessories

The X11SSZ-QF ships with a printed quick reference sheet, an I/O shield and 4 SATA cables. Unfortunately, Supermicro apparently went for the cheapest SATA cables they could find, as they do not have a clip to keep them from sliding off the hard disk connector. This is rather disappointing for a mainboard that costs more than 300 CHF. Furthermore, an S/PDIF bracket is not included, so I needed to order one from the USA.

The I/O shield comes with covers over each port, which I assume is because the X11SSZ mainboard family has different ports (one model has more ethernet ports, for example). When removing the covers, push them through from the rear side of the case (if you have already installed the shield). If you push from the other side, a bit of metal will remain in each port.

Due to the positioning of the CPU socket, with my Fractal Design Define R3 case one cannot reach the back of the CPU fan bracket once the mainboard is installed in the case. Hence, you need to install the CPU fan first and the mainboard second. This is doable; you just need to realize it early enough and plan accordingly, otherwise you’ll end up installing the mainboard twice.

Integrated GPU not initialized

The integrated GPU is not initialized by default. You need to either install an external graphics card or use IPMI to enter the BIOS and change Advanced → Chipset Configuration → Graphics Configuration → Primary Display to “IGFX”.

To use IPMI, connect the ethernet port IPMI_LAN (top right on the back panel, see the X11SSZ-QF quick reference guide) to a network with a DHCP server, then open the IPMI’s IP address in a browser.
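If you are unsure which address the BMC obtained via DHCP, you can also read it out later from a running Linux system (a sketch using ipmitool rather than the freeipmi tools used below; it assumes the BMC’s LAN settings live on channel 1, which is common but not guaranteed):

# load the kernel IPMI drivers so the BMC is reachable from the OS
modprobe ipmi_si
modprobe ipmi_devintf
# print the BMC's LAN configuration, including its IP address
ipmitool lan print 1 | grep -i 'ip address'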

Overeager Fan Control

When I first powered up the mainboard, I was rather confused by the behavior: I got no picture (see above), but LED2 was blinking, meaning “PWR Fail or Fan Fail”. In addition, the computer seemed to turn itself off and on in a loop. After a while, I realized that it’s just the fan control which thinks my slow-spinning Scythe Mugen 3 Rev. B CPU fan is broken because of its low RPM value. The fan control subsequently spins up the fan to maximum speed, realizes the CPU is cool enough, spins down the fan, realizes the fan speed is too low, spins up the fan, etc.

Neither in the BIOS nor in the IPMI web interface did I find any options for the fan thresholds. Luckily, you can actually introspect and configure them using IPMI:

# apt-get install freeipmi-tools
# ipmi-sensors-config --filename=ipmi-sensors.config --checkout

In the human-readable text file ipmi-sensors.config you can now introspect the current configuration. You can see that FAN1 and FAN2 have sections in that file:

Section 607_FAN1
 Enable_All_Event_Messages Yes
 Enable_Scanning_On_This_Sensor Yes
 Enable_Assertion_Event_Lower_Critical_Going_Low Yes
 Enable_Assertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Assertion_Event_Upper_Critical_Going_High Yes
 Enable_Assertion_Event_Upper_Non_Recoverable_Going_High Yes
 Enable_Deassertion_Event_Lower_Critical_Going_Low Yes
 Enable_Deassertion_Event_Lower_Non_Recoverable_Going_Low Yes
 Enable_Deassertion_Event_Upper_Critical_Going_High Yes
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
 Lower_Non_Critical_Threshold 700.000000
 Lower_Critical_Threshold 500.000000
 Lower_Non_Recoverable_Threshold 300.000000
 Upper_Non_Critical_Threshold 25300.000000
 Upper_Critical_Threshold 25400.000000
 Upper_Non_Recoverable_Threshold 25500.000000
 Positive_Going_Threshold_Hysteresis 100.000000
 Negative_Going_Threshold_Hysteresis 100.000000
EndSection

When running ipmi-sensors, you can see the current temperatures, voltages and fan readings. In my case, the fan spins at 700 RPM during normal operation, which was exactly the Lower_Non_Critical_Threshold in the default IPMI config. Hence, I modified my config file as illustrated by the following diff:

--- ipmi-sensors.config	2015-11-13 11:53:00.940595043 +0100
+++ ipmi-sensors-fixed.config	2015-11-13 11:54:49.955641295 +0100
@@ -206,11 +206,11 @@
 Enable_Deassertion_Event_Upper_Non_Recoverable_Going_High Yes
- Lower_Non_Critical_Threshold 700.000000
+ Lower_Non_Critical_Threshold 400.000000
- Lower_Critical_Threshold 500.000000
+ Lower_Critical_Threshold 200.000000
- Lower_Non_Recoverable_Threshold 300.000000
+ Lower_Non_Recoverable_Threshold 0.000000
 Upper_Non_Critical_Threshold 25300.000000

You can install the new configuration using the --commit flag:

# ipmi-sensors-config --filename=ipmi-sensors-fixed.config --commit

You might need to shut down your computer and disconnect power for this change to take effect, since the BMC is running even when the mainboard is powered off.
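After the power cycle, you can check that the BMC picked up the new thresholds by listing the sensors together with their threshold values (a sketch; the --output-sensor-thresholds option requires a reasonably recent freeipmi):

# show fan readings together with the currently configured thresholds
ipmi-sensors --output-sensor-thresholds | grep -i fan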

S/PDIF output

The S/PDIF pin header on the mainboard just doesn’t work at all. It does not work in Windows 7 (for which the board was made), and it doesn’t work in Linux. Neither the digital nor the analog part of the S/PDIF port works. When introspecting the Intel HDA setup of the board, the S/PDIF output is not even hooked up correctly, and even after fixing that, it still doesn’t work.

Of course, I contacted Supermicro support. After I made clear to them what my use case is, they ordered (!) an S/PDIF header and tested its analog part. Their technical support claims that the port is working, but they never replied to my question about which operating system they tested it with, despite my asking multiple times.

It’s pretty disappointing that their support was unable to help here, or at least to confirm that the output is broken.

To address the issue, I’ve bought an ASUS Xonar DX sound card. It works out of the box on Linux, and its S/PDIF port works. The S/PDIF port is shared with the Line-in/Mic-in jack, but a suitable adapter is shipped with the card.
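To confirm that ALSA sees the card and its digital output, listing the playback devices is usually enough (a quick sketch; the exact device names depend on the driver version):

# list ALSA playback devices; the Xonar DX normally shows up with a separate
# "Digital" (IEC958/S-PDIF) device next to its analog multichannel device
aplay -l | grep -i xonar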

Wake-on-LAN

I haven’t gotten around to using Wake-on-LAN or Suspend-to-RAM yet. I will amend this section when I get around to it.

Conclusion

It’s clear that this mainboard is not for consumers. This begins with the awkward graphics and fan control setup and culminates in the apparently entirely untested S/PDIF output.

That said, once you get it working, it works reliably, and it seems like the only reasonable option with two onboard DisplayPort outputs.

by Michael Stapelberg at 10. May 2016 18:46:00

19. April 2016

RaumZeitLabor

MRMCD 2016: CFP

MRMCD Logo 2016

tl;dr: The MRMCD 2016 will take place from the 2nd to the 4th of September in Darmstadt.

The MRMCD clinic Darmstadt is an IT-security clinic providing maximum care. We are looking for practicing participants from all areas of IT and hacker culture and would be delighted if you could contribute to our treatment program. Our focus areas include network surgery and cryptography, plastic and reconstructive open-source development, as well as the use of robots in medicine. Attached to the clinic is one of the largest and most modern centres in Germany for the treatment of severe and most severe medical device hacks, as well as a ward for injuries to the hacker ethic. This makes the MRMCD 2016 the ideal environment for your current research projects. Our expert audience is interested in innovative therapeutic approaches from all areas of IT. An initial diagnosis of the conference topics of previous years is available in the clinic archive.

Ready for new challenges?

Submit your compelling application by 25.07.2016 via our applicant portal. Even if your preferred specialty is not among our focus areas, we of course welcome speculative applications. All new staff members will be notified of the assignment of positions on 01.08.2016. If your application is accepted as part of our CFP, you can already look forward to our well-known all-round care in the exclusive head physician’s lounge.

Questions about applications and about the MRMCD clinic are answered by our staff via email.

The MetaRheinMainChaosDays (MRMCD) are an annual IT conference of the Chaos Computer Club (CCC) that has been taking place for more than ten years, with a slight thematic focus on IT security. Since 2012 the event has been organized in cooperation with the faculty of Computer Science at the University of Applied Sciences Darmstadt (h_da). Besides a high-quality talk program, the MRMCD offer the opportunity for relaxed exchanges with numerous IT experts in an informal atmosphere.

This year’s MRMCD will take place from the 2nd to the 4th of September. Further information can be found at 2016.mrmcd.net.

by Alexander at 19. April 2016 00:00:00