Planet NoName e.V.

29. August 2015

RaumZeitLabor

Seminar: embroider and sew your own bedding

On Saturday, 10 October 2015, from 10:00 to 23:00, there will be a seminar in which participants learn how to create their own embroidery designs, stitch them out on polyester microfibre and then sew pillowcases or duvet covers with a concealed zipper from the result:

Concealed zipper

Anyone who wants to take part must register with me by e-mail by 26.09.2015, 23:59. The number of participants is limited; if there are more registrations than places, lots will be drawn.

For a normal pillowcase you need 2 m of fabric (14€) and a zipper with slider (5€). For a duvet cover you need 5 m of fabric (35€) and a long zipper with slider (6€). For appliqués you additionally need 1 m of fabric per colour (7€); leftover pieces can be used if several participants need the same colours.

No previous knowledge is required, but it makes sense to install the current version of Inkscape before the workshop and to work through the first two tutorials; there is also a guide on the topic in the RZL wiki. All of that is optional, though. If we don't finish on Saturday, the workshop will continue on Sunday from 12:00.

The registration must include the following information:

  • Type (pillowcase or duvet cover).
  • Dimensions.
  • Desired base fabric colour and, if applicable, appliqué colours. You can also bring your own fabric if you prefer. The microfibre fabric has a silky, smooth feel and is pleasantly cool on the skin.
  • Desired zipper colour (it may not be available, in which case I'll pick a similar one; you won't see it in the end anyway).
  • The desired embroidery design, so that I can check beforehand whether it is feasible at all.

Here are two examples in which I combined glow-in-the-dark and normal thread:

Glow-in-the-dark embroidery design that closes its eyes in the dark

In the second example, I realised the body and mane fill as appliqué:

Glow-in-the-dark embroidery design with appliqués

29. August 2015 00:00:00

25. August 2015

RaumZeitLabor

MRMCD 2015: Keynote durch Dr. Paolo Ferri

Dear fellow athletes,

MRMCD 2015 ESA pictogram

true to the motto "Faster. Higher. Further.", we are very proud to present the best speaker to take you faster, higher and further: Dr. Paolo Ferri, Head of Mission Operations at ESA (in charge of all of ESA's uncrewed missions). He will tell us about the end of the very-high-up Rosetta mission on the fast comet Tschuri and about the whereabouts of the far-travelled endurance athlete Philae. He will also reveal how you can do rocket science yourselves, e.g. at ESOC and ESA. He will surely have time for your questions, too!

Time of the event: Friday, 04.09., 17:00

In the meantime, the rest of the competition calendar (the Fahrplan) has been published as well. You can find out what awaits you at this year's MRMCD at: mrmcd.net

With sporting regards,

the MRMCD15 competition committee

25. August 2015 00:00:00

14. August 2015

RaumZeitLabor

MRMCD 2015: Kleiner Einblick in den Fahrplan

Dear fellow athletes,

MRMCD 2015 poster

Since the presale ends soon, we have put together a list of particularly interesting talks to give you a small glimpse of what will be on offer at the MRMCD15. There will be a talk about what you can do with the data that Deutsche Bahn would rather not give us, and about how to have fun with undocumented local-transport APIs. Once you have siphoned off all that data, you have to do something with it; for that there is a talk introducing the parallel processing of large data sets with Apache Spark. Of course there are again many talks about surveillance by companies and intelligence agencies and about the effects this has on your own data. Beyond that, there are talks ranging from bioinformatics to trips to Mars. Freifunk people and networking enthusiasts will also get their money's worth at the MRMCD, be it in talks on topics such as VPNs and wireless physical layer security or at the Freifunk community meetup. The diverse talk programme is accompanied by several workshops on working with IPv6, Ghostscript, Markdown and ImageMagick. And that is far from everything: in keeping with our sporting motto, there will be exciting and fun competitions. The athletic highlight is Team MRMCD's participation in a triathlon in Darmstadt on Sunday.

If you don't have a ticket yet, you have until 16 August to get one. There will be a box office at the event, but it will most likely not have any of our, as always, great mugs or gadgets.

This year the MRMCD take place from 4 to 6 September at Hochschule Darmstadt. As every year, the event offers a broad programme of talks and a relaxed atmosphere for exchanging ideas with interesting people. All further information is available at mrmcd.net.

With sporting regards,

the MRMCD15 competition committee

14. August 2015 00:00:00

09. August 2015

RaumZeitLabor

Report from the embroidery weekend

On 25/26 July the RZL once again hosted an embroidery weekend, this time with visitors from Entropia.

Nervengift made full use of the maximum width of our largest hoop and embroidered a 495 mm wide rainbow-coloured unicorn onto his lab coat:

Unicorn

Habo was there with his family again and worked with Jens on the cross-stitch functionality of Stickes. Stickes is a piece of software that can generate embroidery designs from various fractals. Via an input form for L-systems, you can by now stitch almost every fractal that can be expressed as a Lindenmayer system.

The dragon curve, which we already embroidered at the first embroidery weekend, was stitched out twice as well:

Dragon curve

Jiska embroidered the logo of this year's CC camp onto a polo shirt:

Camp logo

Marsi embroidered with silver glitter thread on microfibre:

Yoga-Cats

And I embroidered my newest Luna design onto fabric and sewed a pillowcase from it.

I combined glow-in-the-dark and silk thread in such a way that Luna closes her eyes as soon as it gets dark:

Luna

On Sunday evening, after most people had left, Jens and I embroidered a hoodie with a large version of the camp logo (98k stitches).

I also continued building the thread stand for small cones:

Thread stand for small cones

By now it is finished and hangs on the wall in the FabLab.

In total we made 444,443 stitches and had a lot of fun doing it. Many thanks also to our visitors, who brought great ideas and with whom we could exchange embroidery techniques.

We will repeat the event next year at the latest; until then we wish our guests few thread breaks and always enough bobbin thread in the machine.

Your StickZeitLabor

09. August 2015 00:00:00

31. July 2015

RaumZeitLabor

MRMCD 2015: End of CfP and presale approaching

Dear fellow athletes,

MRMCD 2015 poster

Traditionally, the MRMCD offer not only the best event mugs, the greatest number of cool gadgets and the best around-the-clock breakfast catering including performance-enhancing substances, but also a rich talk programme including workshops, lightning talks and, this year, competitions as well.

To be able to plan and offer all of this, we depend on your help:

If you want to give a talk or offer a workshop or competition but haven't submitted it yet, please head to the frab. Go there directly, do not pass Go, do not collect 400k€; in return you get access to our exclusive speaker lounge, where you can relax, prepare your talk and enjoy various treats.

The CfP ends on 9 August; on 11 August we will notify the speakers, and a little later we will publish the programme.

If you don't have a ticket yet, you have until 16 August to get one. There will be a box office at the event, but it will most likely not have any of our, as always, great mugs or gadgets. We would like to ask you to really use the presale, as we rely on it for sensible planning. The chart below, from last year, shows why: most of our expenses have to be paid in the period before the event, while most of the income arrives during the event. That makes the work in the run-up quite difficult, and an actively used presale helps us a lot.

Expenses 2014

This year the MRMCD take place from 4 to 6 September at Hochschule Darmstadt. As every year, the event offers a broad programme of talks and a relaxed atmosphere for exchanging ideas with interesting people. All further information is available at mrmcd.net.

With sporting regards,

the MRMCD15 competition committee

31. July 2015 00:00:00

29. July 2015

Mero’s Blog

Backwards compatibility in go

tl;dr: There are next to no "backwards compatible API changes" in go. You should explicitly name your compatibility guarantees.

I really love go, I really hate vendoring, and up until now I didn't really get why anyone would think go should need something like that. After all, go seems to be predestined to be used with automatically checked semantic versioning. You can enumerate all possible changes to an API in go and the list is quite short. By looking at vcs-tags giving semantic versions and diffing the API, you can automatically check that you never break compatibility (the go compiler and stdlib actually do something like that). Heck, in theory you could even write a package manager that automatically (without any further annotations) determines the latest version of a package that still builds all your stuff, or gives you the minimum set of packages that need changes to reconcile conflicts.

This thought led me to contemplate what makes an API change a breaking change. After a bit of thought, my conclusion is that almost every API change is a breaking change, which might surprise you.

For this discussion we first need to make some assumptions about what constitutes breakage. We will use the go1 compatibility promise. The main gist is: Stuff that builds before is guaranteed to build after. Notable exceptions (apart from necessary breakages due to security or other bugs) are unkeyed struct literals and dot-imports.

[edit] I should clarify that whenever I talk about an API change, I mean your exported API as defined by code (as opposed to comments/documentation). This includes the public identifiers exported by your package, including type information. It excludes API requirements specified in documentation, like on io.Writer. These are just too complex to talk about in a meaningful way and must be dealt with separately anyway. [/edit]

So, given this definition of breakage, we can start enumerating all the possible changes you could do to an API and check whether they are breaking under the definition of the go1 compatibility promise:

Adding func/type/var/const at package scope

This is the only thing that seems to be fine under the stability guarantee. It turns out the go authors thought about this one and put the exception of dot-imports into the compatibility promise, which is great.

dot-imports are imports of the form import . "foo". They import every package-level identifier of package foo into the scope of the current file.

The absence of dot-imports means that every identifier at your package scope must be referenced with a selector expression (i.e. foo.Bar), which can't be redeclared by downstream. It also means that you should never use dot-imports in your packages (which is a bad idea for other reasons too). Treat dot-imports as a historic artifact which is completely deprecated. An exception is the need to use a separate foo_test package for your tests to break dependency cycles. In that case it is widely deemed acceptable to import . "foo" to save typing and add clarity.
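
A tiny made-up sketch of why dot-imports are dangerous for compatibility (the import path foo and all names are illustrative):

// foo/foo.go
package foo

func Baz() {}

func Bar() {} // added after the fact

// bar/bar.go
package bar

import . "foo" // dot-import: all of foo's exported identifiers enter this file's scope

// Fine before the change; fails to compile once foo adds Bar, because Bar is
// then declared in both this file's file block (via the dot-import) and the
// package block.
func Bar() {}

func Use() { Baz() } // refers to foo.Baz without a selector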

Removing func/type/var/const at package scope

Downstream might use the removed function/type/variable/constant, so this is obviously a breaking change.

Adding a method to an interface

Downstream might want to create an implementation of your interface and try to pass it. After you add a method, this type doesn't implement your interface anymore and downstream's code will break.
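
A minimal made-up sketch of what that looks like (the import path foo and all names are illustrative):

// foo/foo.go
package foo

type Walker interface {
    Walk()
    Run() // added after the fact
}

// bar/bar.go
package bar

import "foo"

type snail struct{}

func (snail) Walk() {}

func Do(w foo.Walker) { w.Walk() }

func Breaks() {
    Do(snail{}) // fails to compile once Run() is added: snail does not implement foo.Walker (missing Run method)
}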

Removing a method from an interface

Downstream might want to call this method on a value of your interface type, so this is obviously a breaking change.

Adding a field to a struct

This is perhaps surprising, but adding a field to a struct is a breaking change. The reason is that downstream might embed two types in a struct. If one of them has a field or method Bar and the other is a struct you added the field Bar to, downstream will fail to build (because of an ambiguous selector expression).

So, e.g.:

// foo/foo.go
package foo

type Foo struct {
    Foo string
    Bar int // Added after the fact
}

// bar/bar.go
package bar

import "foo"

type Baz struct {
    Bar int
}

type Spam struct {
    foo.Foo
    Baz
}

func Eggs() {
    var s Spam
    s.Bar = 42 // ambiguous selector s.Bar
}

This is presumably what the compatibility promise refers to with the following quote:

Code that uses unkeyed struct literals (such as pkg.T{3, "x"}) to create values of these types would fail to compile after such a change. However, code that uses keyed literals (pkg.T{A: 3, B: "x"}) will continue to compile after such a change. We will update such data structures in a way that allows keyed struct literals to remain compatible, although unkeyed literals may fail to compile. (There are also more intricate cases involving nested data structures or interfaces, but they have the same resolution.)

(emphasis mine). By "the same resolution" they might mean accessing embedded fields only via a fully qualified selector (so e.g. s.Baz.Bar in the above example). If so, that is pretty obscure and it makes struct embedding pretty much useless: every usage of a field or method of an embedded type must be explicitly qualified, which means you might as well not embed it at all. You need to write the selector and wrap every embedded method anyway.

I hope we all agree that type embedding is awesome and shouldn't need to be avoided :)

Removing a field from a struct

Downstream might use the now removed field, so this is obviously a breaking change.

Adding a method to a type

The argument is pretty much the same as adding a field to a struct: Downstream might embed your type and suddenly get ambiguities.
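
A condensed made-up sketch of the method case (again with an illustrative import path):

// foo/foo.go
package foo

type Client struct{}

func (Client) Close() {} // method added after the fact

// bar/bar.go
package bar

import "foo"

type logger struct{}

func (logger) Close() {}

type Session struct {
    foo.Client
    logger
}

func Shutdown() {
    var s Session
    s.Close() // ambiguous selector s.Close, once foo.Client gains the Close method
}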

Removing a method from a type

Downstream might call the now removed method, so this is obviously a breaking change.

Changing a function/method signature

Most changes are obviously breaking. But as it turns out, you can't make any change to a function or method signature. This includes adding a variadic parameter, which looks backwards compatible on the surface. After all, every call site will still be correct, right?

The reason is that downstream might store your function or method in a variable of the old type, which will break because the new signature is not assignable to it.
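
A minimal made-up sketch (the import path foo and all names are illustrative):

// foo/foo.go
package foo

// Was: func Sum(a, b int) int
func Sum(a, b int, extra ...int) int { // variadic parameter added after the fact
    s := a + b
    for _, x := range extra {
        s += x
    }
    return s
}

// bar/bar.go
package bar

import "foo"

// Fails to compile after the change: foo.Sum now has type func(int, int, ...int) int,
// which is not assignable to a variable of type func(int, int) int.
var sum func(a, b int) int = foo.Sum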

Conclusion

It looks to me like anything that isn't just adding a new identifier to the package scope will potentially break some downstream. This severely limits the kinds of changes you can make to your API if you want to claim backwards compatibility.

This of course doesn't mean that you should never ever make any changes to your API. But you should think about it and you should clearly document what kind of compatibility guarantees you make. When you make any of the changes named in this post, you should check whether your downstreams are affected by them. If you claim a similar level of compatibility to the go standard library, you should definitely be aware of the implications and of what you can and can't do.

We, the go community, should probably come up with a coherent definition of which changes we deem backwards compatible and which we don't. A tool that automatically looks up all your (public) importers on godoc.org, downloads their latest versions and tries to build them against your changes should be fairly simple to write in go (and may even already exist). We should make it a standard check (like go vet and golint) for upstream package authors to run that kind of thing before pushing, to prevent frustrated downstreams.

Of course there is still the possibility that my reading of the go1 compatibility promise is wrong or inaccurate. I would welcome comments on that, just like on everything else in this post :)

29. July 2015 01:10:11

20. July 2015

RaumZeitLabor

MRMCD 2015: Fair & social sportswear

Dear fellow athletes,

as always there are event textiles for the MRMCD, available (exclusively!) in the presale. So that you know what to expect this year, we have put together a small preview:

MRMCD15 T-Shirt

MRMCD15 Hoodie

The textiles are screen-printed at 7 Siebe, a work project of the Caritasverband Stuttgart for the occupational reintegration of former drug addicts. We are happy to support this goal. Even though this is somewhat more expensive for us than the self-production of past years, the prices stay the same for you: T-shirts still cost 15€ and hoodies 35€.

Just like last year, we are again using fair-trade textiles by Stanley&Stella. The T-shirts are "Stanley Leads" and "Stella Wants" respectively; as hoodies we offer "Stanley Knows" and "Stella Says".

The Stanleys (men's cut) are available in sizes XS to 3XL, the Stellas (women's cut) in XXS to XXL.

With your event T-shirt or hoodie you therefore support not only the reintegration of former drug addicts but also humane conditions in textile production.

This year the MRMCD take place from 4 to 6 September at Hochschule Darmstadt. As every year, the event offers a broad programme of talks and a relaxed atmosphere for exchanging ideas with interesting people. All further information is available at mrmcd.net.

With sporting regards,

the MRMCD15 competition committee

20. July 2015 00:00:00

18. July 2015

sECuREs Webseite

ViewSonic VX2475Smhl-4K HiDPI display on Linux

I have been using a Dell UP2414Q monitor for a little over a year now. The Dell UP2414Q was the first commercially available display that qualified as what Apple calls a Retina Display, meaning it has such a high resolution that you cannot see the individual pixels in normal viewing distance. In more technical terms, this is called a HiDPI display. To be specific, Dell’s UP2414Q has a 527mm wide and 296mm high screen, hence it has 185 dpi (3840 px / (527 mm / 25.4 mm)). I configured my system to use 192 dpi instead of the actual 185 dpi, because 192 is a clean multiple of 96 dpi, so scaling gets easier for software.

The big drawback of the Dell UP2414Q, and all other 4K displays with sufficiently high dpi, is that the display uses multiple scalers (also called multiple tiles) internally. This makes use of a DisplayPort feature called Multiple Stream Transport (MST): in layman’s terms, the display communicates to the graphics card that there are actually two displays with a resolution of 1920x1080 px each, both connected via the same DisplayPort cable. The graphics card then needs to split each frame into two halves and send them over the DisplayPort connection to the monitor. The reason all display vendors used this technique is that at the time, there simply were no scalers on the market which supported a resolution of 3840x2160 px.

Driver support for tiled displays

The problem with MST is that it’s poorly supported in the Linux ecosystem: for the longest time, the only driver supporting it at all was the closed-source nVidia driver, and you had to live without RandR when using it. With linux 4.1, MST support was added for the radeon driver, but I’m not sure if that is all that’s necessary to support 4K MST displays as there are other use-cases for MST. The intel driver still doesn’t have any MST support whatsoever, as of now (linux 4.1).

(RandR) Software support for tiled displays

Regardless of the driver you are using, you’ll need the very latest RandR 1.5, otherwise software will just see multiple monitors instead of one big monitor. Keith Packard published a blog post with a proposal to address this shortcoming, and the actual implementation work was included in the randrproto 1.5 release at 2015-05-17. I think it will take a while until all relevant software is updated, with your graphics driver, the X server and your desktop environment being the most important pieces of software. It’s unclear to me when/if Wayland will support tiled 4K displays.

Having to disable RandR means you’ll be unable to use tools like redshift to dim your monitor’s backlight, and you won’t be able to reconfigure or rotate your monitor without restarting your X session. Especially on a laptop, this is a big deal.

The ViewSonic VX2475Smhl-4K

I’m not sure why ViewSonic chose such a long product name, especially when comparing it with the competition’s names like HP z24s or BenQ BL2420U. This makes it pretty hard to talk about the product in real life, because nobody is going to remember that cryptic name. In that sense, it’s a good thing I won’t need to recommend this product.

Let’s start with the positive, and the main reason why I bought this monitor: the screen itself is great. It entirely fulfills my needs for extended periods of office work and occasionally watching a movie. I don’t play games on this monitor. With regards to connectivity, it comes with 2 HDMI ports (one of which is MHL-capable for connecting smartphones and tablets) and 1 DisplayPort. Since most graphics cards don’t support HDMI 2.0 yet (which you need for the native resolution of 3840x2160px at 60Hz), I am driving the monitor using DisplayPort, which works perfectly fine so far.

Unfortunately, the screen is the only good thing about this entire monitor. If you are using it in an office setting, you might be used to a lot more comfort. Here is a list of shortcomings, sorted by how severe I think each issue is:

  1. The monitor does not contain a USB hub at all. This is a big shortcoming I can’t understand at all. From wireless receivers for mouse/keyboard, to Yubikeys for second-factor authentication, to the occasional USB thumb drive, I don’t understand how anyone would not see the lack of USB ports as a big minus.
  2. The case of the monitor is painted in a glossy black, reflecting light. This is ironic, since the screen itself is matte, but you still have light reflections in your field of vision. I’ll need to see what I can do about that.
  3. The monitor feels a lot cheaper than other monitors. The stand it comes with is flimsy and does not allow for rotating the monitor or adjusting the height at all, so I’ve already ordered an Ergotron LX 45-241-026 to replace the stand. The buttons to power off/on and navigate the on-screen display don’t feel comfortable, and the power LED is bright blue, reflecting multiple times in the glossy stand.

Using the ViewSonic VX2475Smhl-4K with Linux

Using DisplayPort with the nouveau open-source driver

Connecting the ViewSonic VX2475Smhl-4K to my Gainward GeForce GTX 660 using DisplayPort works perfectly fine with the nouveau open-source driver version 1.0.11, meaning the driver detects the full native resolution of 3840x2160 px with a refresh rate of 60 Hz and does not use YUV 4:2:0 compression. As I understand it, you need to have a graphics card with DisplayPort 1.2 or newer in order to achieve 3840x2160 px at 60 Hz. Here’s the output of xrandr:

$ xrandr
Screen 0: minimum 320 x 200, current 3840 x 2160, maximum 8192 x 8192
DVI-I-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-1 connected 3840x2160+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   3840x2160     60.00*+  30.00    25.00    24.00    29.97    23.98  
   1920x1080     60.00    50.00    59.94  
   1920x1080i    60.00    50.00    59.94  
   1600x1200     60.00  
   1680x1050     59.95  
   1400x1050     59.98  
   1600x900      59.98  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.08    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DVI-D-1 disconnected (normal left inverted right x axis y axis)

Using DisplayPort with the nVidia closed-source driver

Connecting the ViewSonic VX2475Smhl-4K to my Gainward GeForce GTX 660 using DisplayPort works perfectly fine with the nVidia closed-source driver version 352.21 (haven’t tested it with other versions), meaning the driver detects the full native resolution of 3840x2160 px with a refresh rate of 60 Hz and does not use YUV 4:2:0 compression. As I understand it, you need to have a graphics card with DisplayPort 1.2 or newer in order to achieve 3840x2160 px at 60 Hz. Here’s the output of xrandr:

Screen 0: minimum 8 x 8, current 3840 x 2160, maximum 16384 x 16384
DVI-I-0 disconnected primary (normal left inverted right x axis y axis)
DVI-I-1 disconnected (normal left inverted right x axis y axis)
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-0 disconnected (normal left inverted right x axis y axis)
DVI-D-0 disconnected (normal left inverted right x axis y axis)
DP-1 connected 3840x2160+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   3840x2160     60.00*+  29.97    25.00    23.98  
   1920x1080     60.00    59.94    50.00    60.00    50.04  
   1680x1050     59.95  
   1600x1200     60.00  
   1600x900      60.00  
   1440x900      59.89  
   1400x1050     59.98  
   1280x1024     75.02    60.02  
   1280x720      60.00    59.94    50.00  
   1024x768      75.03    70.07    60.00  
   800x600       75.00    72.19    60.32    56.25  
   720x576       50.00  
   720x480       59.94  
   640x480       75.00    72.81    59.94    59.93  

And here’s the relevant block of verbose log output generated by the nVidia driver in /var/log/Xorg.0.log when starting X11 with -logverbose 6:

[ 87934.515] (II) NVIDIA(GPU-0): --- Building ModePool for ViewSonic VX2475 SERIES (DFP-4) ---
[ 87934.515] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[ 87934.515] (II) NVIDIA(GPU-0):     Mode Source: EDID
[ 87934.515] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[ 87934.515] (II) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[ 87934.515] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[ 87934.515] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[ 87934.515] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[ 87934.515] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[ 87934.515] (II) NVIDIA(GPU-0):       H/V Polarity     : +/-
[ 87934.515] (II) NVIDIA(GPU-0):     Viewport                 3840x2160+0+0
[ 87934.515] (II) NVIDIA(GPU-0):       Horizontal Taps        0
[ 87934.515] (II) NVIDIA(GPU-0):       Vertical Taps          0
[ 87934.515] (II) NVIDIA(GPU-0):       Base SuperSample       x1
[ 87934.515] (II) NVIDIA(GPU-0):       Base Depth             32
[ 87934.515] (II) NVIDIA(GPU-0):       Distributed Rendering  1
[ 87934.515] (II) NVIDIA(GPU-0):       Overlay Depth          32
[ 87934.515] (II) NVIDIA(GPU-0):     Mode "3840x2160_60" is valid.

Using HDMI with the nVidia closed-source driver

In order to drive the ViewSonic VX2475Smhl-4K with its native resolution of 3840x2160 px at a refresh rate of 60 Hz, you’ll need to have a graphics card that supports HDMI 2.0. As of 2015-07-17, the only cards I can find that feature HDMI 2.0 are nVidia’s Maxwell cards (NV110), e.g. the models GeForce GTX 960, 970 or 980. These models need a signed firmware blob, which nVidia has not yet released, hence you cannot use them with the open-source nouveau driver at all. I’ll not buy them until this issue is resolved.

Even though I know that it’s not supported, I was curious to see what happens when I try to connect the display to my GeForce GTX 660, which only has HDMI 1.4.

With the nVidia closed-source driver version 346.47, by default, you will end up with a resolution of 1920x1080 px. The X11 logfile /var/log/Xorg.0.log contains the following verbose log output when starting X11 with -logverbose 6:

[  8265.425] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160":
[  8265.425] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[  8265.425] (II) NVIDIA(GPU-0):     Mode Source: EDID
[  8265.425] (II) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[  8265.425] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[  8265.425] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[  8265.425] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[  8265.425] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[  8265.425] (II) NVIDIA(GPU-0):       H/V Polarity     : +/-
[  8265.425] (WW) NVIDIA(GPU-0):     Mode is rejected: PixelClock (533.2 MHz) too high for
[  8265.426] (WW) NVIDIA(GPU-0):     Display Device (Max: 340.0 MHz).

[  8265.429] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160":
[  8265.429] (II) NVIDIA(GPU-0):     3840 x 2160 @ 30 Hz
[  8265.429] (II) NVIDIA(GPU-0):     Mode Source: EDID
[  8265.429] (II) NVIDIA(GPU-0):       Pixel Clock      : 296.70 MHz
[  8265.429] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 4016
[  8265.429] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 4104, 4400
[  8265.429] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2168
[  8265.429] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2178, 2250
[  8265.429] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[  8265.429] (WW) NVIDIA(GPU-0):     Mode is rejected: Mode requires YUV 4:2:0 compression.

When setting Option "ModeValidation" "AllowNonEdidModes" and configuring the custom modeline Modeline "3840x2160" 307.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync, you can get a resolution of 3840x2160 px, but a refresh rate of only 30 Hz. Such a low refresh rate is only okay for watching movies — any sort of regular computer work is very inconvenient, as the mouse pointer is severely jumpy/lagging.
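
For reference, here is a sketch of how those two settings could be combined in an xorg.conf snippet; this is an illustration rather than a verified configuration, and the section identifiers and the HDMI-0 output name are placeholders that need to match your own setup (see xrandr):

Section "Monitor"
    Identifier "ViewSonic4K"
    # Custom 3840x2160@30 mode for HDMI 1.4, as quoted above
    Modeline   "3840x2160" 307.00 3840 4016 4104 4400 2160 2168 2178 2250 +hsync +vsync
    Option     "PreferredMode" "3840x2160"
EndSection

Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
    # Accept modes that are not listed in the display's EDID
    Option     "ModeValidation" "AllowNonEdidModes"
    # Associate the Monitor section above with the HDMI output
    Option     "Monitor-HDMI-0" "ViewSonic4K"
EndSection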

Since driver version 349.12, nVidia added support for YUV 4:2:0 compression. See this anandtech article about how nVidia cards achieve 4k@60 Hz over HDMI 1.4 and the wikipedia article on chroma subsampling in general.

I upgraded my driver to version 352.21, and indeed, by default, it will now drive the monitor with 3840x2160 px at a refresh rate of 60 Hz, but using YUV 4:2:0 compression. This compression is immediately visible as the picture quality is so much worse. You can even see it in simple things such as a GMail tab. To me, it looks similar to when you accidentally misconfigure your system to use 16-bit colors instead of 24-bit colors. I recommend you try to avoid YUV 4:2:0 compression as much as possible, unless maybe if you’re just trying to watch movies and aren’t interested in best quality.

With version 352.21, the X11 logfile /var/log/Xorg.0.log contains the following verbose log output when starting X11 with -logverbose 6:

[   123.402] (WW) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[   123.402] (WW) NVIDIA(GPU-0):     Mode Source: EDID
[   123.402] (WW) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[   123.402] (WW) NVIDIA(GPU-0):       Pixel Clock      : 533.25 MHz
[   123.402] (WW) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 3888
[   123.402] (WW) NVIDIA(GPU-0):       HSyncEnd, HTotal : 3920, 4000
[   123.402] (WW) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2163
[   123.402] (WW) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2168, 2222
[   123.402] (WW) NVIDIA(GPU-0):       H/V Polarity     : +/-
[   123.402] (WW) NVIDIA(GPU-0):     Mode is rejected: PixelClock (533.2 MHz) too high for
[   123.402] (WW) NVIDIA(GPU-0):     Display Device (Max: 340.0 MHz).
[   123.402] (WW) NVIDIA(GPU-0):     Mode "3840x2160_60" is invalid.

[   123.406] (II) NVIDIA(GPU-0):   Validating Mode "3840x2160_60":
[   123.406] (II) NVIDIA(GPU-0):     Mode Source: EDID
[   123.406] (II) NVIDIA(GPU-0):     3840 x 2160 @ 60 Hz
[   123.406] (II) NVIDIA(GPU-0):       Pixel Clock      : 593.41 MHz
[   123.406] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 3840, 4016
[   123.406] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 4104, 4400
[   123.406] (II) NVIDIA(GPU-0):       VRes, VSyncStart : 2160, 2168
[   123.406] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal : 2178, 2250
[   123.406] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[   123.406] (II) NVIDIA(GPU-0):     Viewport                 1920x2160+0+0
[   123.406] (II) NVIDIA(GPU-0):       Horizontal Taps        0
[   123.406] (II) NVIDIA(GPU-0):       Vertical Taps          0
[   123.406] (II) NVIDIA(GPU-0):       Base SuperSample       x1
[   123.406] (II) NVIDIA(GPU-0):       Base Depth             32
[   123.406] (II) NVIDIA(GPU-0):       Distributed Rendering  1
[   123.406] (II) NVIDIA(GPU-0):       Overlay Depth          32
[   123.406] (II) NVIDIA(GPU-0):     Mode "3840x2160_60" is valid.

Conclusion

With its single scaler, the ViewSonic VX2475Smhl-4K works just fine on Linux when using DisplayPort, which is a big improvement over the finicky Dell UP2414Q. Everything else about this monitor is pretty bad, so I would recommend you have a close look at the competition’s models, which as of 2015-07-17 are the HP z24s and the BenQ BL2420U. Neither of these are currently available in Switzerland, so it will take a while until I have the possibility to review either of them.

by Michael Stapelberg at 18. July 2015 00:30:00

17. July 2015

Mero’s Blog

Lazy evaluation in go

tl;dr: I did lazy evaluation in go

A small pattern that is useful for some algorithms is lazy evaluation. Haskell is famous for making extensive use of it. One way to emulate goroutine-safe lazy evaluation in go is using closures and the sync package:

package main

import (
    "fmt"
    "sync"
)

type LazyInt func() int

func Make(f func() int) LazyInt {
    var v int
    var once sync.Once
    return func() int {
        once.Do(func() {
            v = f()
            f = nil // so that f can now be GC'ed
        })
        return v
    }
}

func main() {
    n := Make(func() int { return 23 }) // Or something more expensive…
    fmt.Println(n())                    // Calculates the 23
    fmt.Println(n() + 42)               // Reuses the calculated value
}

This is not the fastest possible code, but it already has less overhead than one would think (and it is pretty simple to deduce a faster implementation from this). I have implemented a simple command that generates these implementations (or rather, more optimized ones based on the same idea) for different types.

This is of course just the simplest use case for laziness. In practice, you might also want implementations of lazy expressions

func LazyAdd(a, b LazyInt) LazyInt {
    return Make(func() int { return a() + b() })
}

or lazy slices (slightly more complicated to implement, but possible), but I left that for a later improvement of the package (plus, it would make the already quite big API even bigger) :)
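
For what it's worth, the simplest lazy-slice variant could be sketched on top of Make like this (just an illustration, not part of the package):

// LazyIntSlice holds one lazily evaluated int per element.
type LazyIntSlice []LazyInt

// MakeSlice wraps a slice of thunks; element i is only computed the first time s[i]() is called.
func MakeSlice(fs []func() int) LazyIntSlice {
    s := make(LazyIntSlice, len(fs))
    for i, f := range fs {
        s[i] = Make(f)
    }
    return s
}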

17. July 2015 19:31:10

11. July 2015

RaumZeitLabor

Progress on the CNC mill

Connectors from the outside; the case from the inside

I have continued working on the CNC mill. It now has connectors for all three axes as well as for the spindle. Still missing are the connectors on the opposite side and the connectors for the endstops. I hope to have that done by the end of next week, so that the CNC machine can then be released for use.

The milling motor for the Proxxon mill has also arrived; DaFo and I installed it. Works perfectly.

The components for the electronics corner have arrived as well. That means the PSK connectors are here too, but there are a few things to watch out for when crimping them (the correct side). If you have any questions about this, feel free to contact me and I'll show you the best way to crimp the connectors.

11. July 2015 22:11:50

MRMCD 2015: The presale is running

MRMCD 2015 poster

Dear fellow athletes,

the presale for this year's MRMCD has started! As of now you can pre-order tickets and merchandise for the MRMCD at mrmcd.net/presale. Even though there will be a box office, we ask you to use the presale so that we have planning certainty and can prepare a great event. All tickets include our famously rich around-the-clock athlete's breakfast.

Important: all merchandise, such as the (as always legendary) mug and further surprise gadgets, is only available for tickets from the presale!

At the same time we are still happy to receive submissions for talks on IT security, software, open source, embedded systems, networks, anonymity and all kinds of other related topics. This year we also want to hold competitions in traditional(?) nerd sports such as extreme cactus desiccation, smartphone drowning or keyboard fencing; we are still looking for good ideas there as well. We look forward to your submissions directly in the frab.

This year the MRMCD take place from 4 to 6 September at Hochschule Darmstadt. As every year, the event offers a broad programme of talks and a relaxed atmosphere for exchanging ideas with interesting people. All further information is available at mrmcd.net.

With sporting regards,

the MRMCD15 competition committee

11. July 2015 00:00:00

08. July 2015

RaumZeitLabor

Building your own thread stand for large spools

Of the frequently used threads we now have several on 5 km spools, but so far no good way to store them. So I built a thread stand for large spools:

Finished thread stand

There is a how-to in the wiki if you want to build your own.

08. July 2015 00:00:00

06. July 2015

RaumZeitLabor

Creating overlap-free vector graphics

Vector graphic to be processed

I regularly use vector graphics to realise ideas with the cutting plotter, the laser cutter or the embroidery machine. Unfortunately, none of the three can handle all SVG features, so in the past I repeatedly spent a lot of time preparing vector graphics for the respective devices. At some point that became too tedious, so I started adding new features to Inkscape in order to automate as much of the manual work as possible.

So that even more people can save themselves the manual work, I have written a guide.

06. July 2015 00:00:00

02. May 2015

koebi

Setting Trackpoint speed via systemd

Disclaimer: This post does not offer the truth about how trackpoint speed and sensitivity should be set; it just shows how I am doing it at the moment. Since this is the first time I have written a systemd service file, or anything like it, I am very happy to receive feedback on mistakes I may have made or on anything else. Why trackpoint speed/sensitivity matters (to me): I am currently using a Thinkpad X220i with a heavily keyboard-based setup.

02. May 2015 00:00:00

13. April 2015

Mero’s Blog

Difficulties making SQL based authentication resilient against timing attacks

I've been thinking about how to do an authentication scheme that uses some kind of relational database as a backing store (it doesn't specifically matter that the database is relational; the concerns should pretty much apply to all databases), in a way that is resilient against timing side-channel attacks and doesn't leak any data about which usernames exist in the system and which don't.

The first obvious thing is that you need to do a constant-time comparison of password hashes. Luckily, most modern crypto libraries include something like that (at least go's bcrypt implementation comes with it).

But now the question is how you prevent enumerating users (or checking for their existence). A naive query will return an empty result set if the user does not exist, so again, obviously, you need to compare against some password even if the user isn't found. But just doing, for example,

if result.Empty {
    // Compare against a prepared hash of an empty password, to have constant
    // time check.
    bcrypt.CompareHashAndPassword(HashOfEmptyPassword, enteredPassword)
} else {
    bcrypt.CompareHashAndPassword(result.PasswordHash, enteredPassword)
}

won't get you very far, because (for example) the CPU will predict one of the two branches (and the compiler might or might not decide to "help" with that), so again an attacker might be able to distinguish between the two cases. The best way to achieve resilience against timing side-channels is to make sure that your control flow does not depend on input data at all, meaning no branch or loop should in any way take into account what is actually input into your code (including the username and the result of the database query).

So my next thought was to modify the query to return the hash of an empty password as a default if no user is found. That way, your code is guaranteed to always get a well-defined bcrypt hash from the database and your control flow does not depend on whether or not the user exists (and an empty password can be safely excluded in advance, as returning early for that does not give any new data to the attacker).
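
A rough sketch of what I mean (an illustration, not vetted code); it assumes a users table whose password_hash column stores the bcrypt hash as text, PostgreSQL-style placeholders, and a hash of the empty password precomputed at startup:

package auth

import (
    "database/sql"

    "golang.org/x/crypto/bcrypt"
)

// hashOfEmptyPassword is assumed to be precomputed once, e.g. with
// bcrypt.GenerateFromPassword([]byte(""), bcrypt.DefaultCost).
var hashOfEmptyPassword []byte

// CheckPassword always receives exactly one hash from the database, whether or
// not the user exists, so its control flow does not branch on that fact.
func CheckPassword(db *sql.DB, username, password string) (bool, error) {
    var hash []byte
    // MAX collapses the zero or one matching rows into a single value;
    // COALESCE substitutes the empty-password hash when there is no match.
    err := db.QueryRow(
        `SELECT COALESCE(MAX(password_hash), $2) FROM users WHERE username = $1`,
        username, string(hashOfEmptyPassword),
    ).Scan(&hash)
    if err != nil {
        return false, err
    }
    // Constant-time comparison; empty passwords must have been rejected beforehand.
    return bcrypt.CompareHashAndPassword(hash, []byte(password)) == nil, nil
}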

This sounds good, but now the question is whether the timing of your database query itself tells the attacker something. And this is where I hit a roadblock: if the attacker knows enough about your setup (i.e. what database engine you are using, what machine you are running on and what kind of indices your database uses), they can potentially enumerate users by timing your database queries. To illustrate: if you used a simple linear list as an index, a failed search would have to traverse the whole list, whereas a successful search can abort early. The same issue exists with balanced trees. An attacker could hammer your application with unlikely usernames and measure the mean time to answer. They can then test individual usernames and check whether the time to answer is significantly below the mean for failures, thus enumerating usernames.

Now, I haven't tested this for practicality yet (might be fun) and it is pretty likely that this can't be exploited in reality. Also, the possibility of enumerating users isn't particularly desirable, but it is also far from a security meltdown of your authentication-system. Nevertheless, the idea that this theoretical problem exists makes me uneasy.

An obvious fix would be to make sure that every query has to search the complete table on every lookup. I don't know whether that is possible; it might be as trivial as not giving a LIMIT and not marking the username column as unique, but it might also be hard and database-dependent, because there will still be an index over the username column which might create the same kind of issues. There will also likely still be variance, because we have basically just shifted the condition from our own code into the DBMS. I simply have no idea.

So there you have it. I am happy to be corrected and pointed to some trivial design. I will likely accept the possibility of being vulnerable here, as the systems I am currently building aren't that critical. But I will probably still have a look at how other projects handle this, and maybe at whether there really is a problem in practice.

13. April 2015 02:49:53

10. April 2015

koebi

Connecting an iPod Shuffle

Backstory: A few years ago, driving home from a night of drinking in the old town of Heidelberg, I stumbled upon a 4th-generation iPod shuffle. Being the drunk and stupid guy I was back then, I didn’t think much and just took it home, only to wake up the next day realizing I was now the proud owner of an iPod. Yay. Of course, now that I had that thing, I wanted to use it.

10. April 2015 00:00:00

08. February 2015

Raphael Michel

Experiences with Django

Patrick asked me to write down things about Django that go beyond the basic workshop I gave recently. I'm not quite sure what to write and how much of it is actually interesting, but I simply went through the code of my finished and ongoing Django projects and noted whatever seemed like it could be useful to other developers. I have grouped all of this very loosely into sections, but otherwise not sorted it further.

General

Use Python 3. Please. And Django 1.7+. Thank you. And read more of the documentation; almost all of it is worth reading. :-)

Project structure

Django offers the option to split a project into so-called apps. The idea behind this is mainly reusability of apps, which I so far consider rather unrealistic in practice, but it can still be sensible to use this mechanism. I have tried different approaches in different projects, ranging from a monolithic model with one gigantic app plus additional "small" apps for meta functions such as analysis and statistics, to a split into several interchangeable apps (e.g. one for the backend and one for the frontend) that must not depend on each other, plus one app with shared functions and models that all the others depend on. Separate apps for separate features are certainly often sensible, but for me usually not realistic, because the features are too strongly interconnected and integrated with each other.

Which approach makes sense probably depends a lot on the project, but so far the latter setup has worked best for me.

By default, Django creates a views.py, a urls.py, a tests.py and a models.py per app. While you can often get by with a single file for urls and models for a very long time, for views and tests it is useful to know that Django doesn't mind if you turn them into a subpackage and split them across several files.

Debugging

For debugging I recommend the well-known Django Debug Toolbar, which is very useful to me not only for debugging itself but above all for keeping an eye on the ORM: if you write code carelessly, it is unfortunately easy to end up producing an extremely large number of database queries. The problem often goes unnoticed during development, when you work with small test data sets, but then causes huge load as soon as you go to production. Rule of thumb: every action should cause a constant (or constantly bounded) number of database queries, independent of the number of records.

In production, when users complain, django-impersonate is a useful tool for reproducing errors.

Static files

In any larger project you will probably want to write LESS or SASS instead of plain CSS. django-compressor handles that for you, just as it can compress CSS and JavaScript. In development mode it does this live and transparently; in production you can have it precompile everything.

Models

Have a look at the documentation for models.ForeignKey and keep in mind that the default setting for on_delete is CASCADE: if you delete a referenced object, all referencing objects are deleted along with it. That can cost you your head (tested for you), and in almost all cases you are better off with PROTECT or SET_NULL.

Authentication

For a while now, Django has made it quite easy to use your own user model instead of the bundled django.contrib.auth.models.User. Make use of this option early – if the need only arises later, a migration in production is possible, but quite laborious1.

Signals

Django's signal framework is much more powerful than it looks at first glance. There will probably be a talk or a separate blog post from me about this at some point.

Background processes

There is almost no case in which it is justified to make the user wait for a response. You should therefore strictly avoid doing anything that can take longer inside a view. I use Celery as a task queue for as much as possible of what can take longer or depends on the reachability of external services, be it exporting a lot of data or merely sending e-mails.

For cronjob-like things I used to use django-cron, but I need to look for an alternative or port it to Python 3.

Templates

It is worth considering replacing Django's template engine with Jinja2. I did this in one project because I had to express things that would have been very cumbersome in Django's TE, and I will probably do it more often once Django 1.8 is released, which will offer significantly better support for external template engines.

Views

At the beginning of my work with Django I underestimated how powerful class-based views are and used them far too little. I can only recommend that you take a very close look at them.

Unit Tests

If you want to do real interface tests, Django can be combined with Selenium quite well using the StaticLiveServerTestCase2.

Localisation

Even though I have not yet had the dubious pleasure of bringing a multilingual project into production, I can offer the tip that, if you ever want to do it, you cannot plan for it early enough.


  1. Yes, I tested that for you as well. 

  2. There is an example in the documentation. 

08. February 2015 23:00:00

30. January 2015

koebi

A little trick for copy-paste

Every now and then I run into the problem of wanting to paste output from the command line somewhere, be it because I don't want to type a complicated path again, because I want to cat something and paste it somewhere, or for some other reason. Of course you could somehow use the middle mouse button ("copying the linux way"), but that is sometimes fiddly, or use some other workaround. As a solution for this I recently discovered the wonderful tool xclip.

30. January 2015 00:00:00

A first post/Ein erster Post

A first sign of life… …is exactly what this blog post is. Since I haven’t decided whether to write german or english in this blog, this post exists in both languages. Ein erstes Lebenszeichen… …ist genau das, was dieser Blogpost ist. Da ich mich noch nicht entschieden habe, ob ich in diesem Blog auf Deutsch oder Englisch schreiben werde, existiert dieser Post in beiden Sprachen.

30. January 2015 00:00:00

17. January 2015

Raphael Michel

31c3 talk reviews

I watched a very large number of talks at and (above all) after the 31st Chaos Communication Congress. Since I was asked again and again which ones I can recommend, here are short snippets of opinion on many of them. This blog post may also be quietly extended over time.

Please note: these are very short and possibly subjective, unreflected, wrong, unfair, etc., and only reflect my thoughts right after watching the talk. They are sorted very roughly from top to bottom by recommendation, but really only very roughly.

  • Reconstructing narratives by Jacob Appelbaum and Laura Poitras

    For me, the talk of the Congress that left the biggest impression. The atmosphere in the hall would have been enough for several keynotes. Definitely one to watch.

  • Traue keinem Scan, den du nicht selbst gefälscht hast by David Kriesel, about a catastrophic bug in Xerox scanners

    Excellent talk, delivered very entertainingly. For me the biggest laugh of the Congress. Watch it!

  • Correcting copywrongs by Julia Reda, about copyright reform in the EU

    Highly recommendable talk, insightful and easy to watch.

  • Ich sehe, also bin ich .. Du by starbug, about biometrics

    Even if you have read the press release, the talk still holds some surprises and explanations and is definitely worth watching.

  • Mit Kunst die Gesellschaft hacken by Stefan Pelzer and Philipp Ruch, about the ZPS

    A self-introduction of the Zentrum für politische Schönheit. If you haven't followed their activities so far, you should really watch this; I found the talk extremely inspiring.

  • Security Analysis of Estonia's Internet Voting System by J. Alex Halderman

    Entertaining and insightful, roughly what one would expect.

  • State of the Onion by Jacob and arma, about Tor

    Worth watching, as it is every year, but unfortunately not, as announced, with more Q&A than usual.

  • Why are computers so @#!*, and what can we do about it? by Peter Sewell

    A very entertaining rant that still manages to come around to proposed solutions. Worth watching!

  • ECCHacks by Tanja Lange and djb, about elliptic curves

    A very good introduction to elliptic curves that you can actually understand even if you have never heard of them before. You should be somewhat practised in mathematics, but really only need school-level knowledge to understand it.

  • Let's build a quantum computer! by Andreas Dewes

    Andreas tries to explain quantum computers simply; the talk is a lot of fun and was very instructive for me, but I fear that despite all his efforts you need a certain physics background to understand it.

  • What Ever Happened to Nuclear Weapons? by Michael Büker

    A mainly historical and political summary of everything around nuclear weapons – the talk is well delivered and stays interesting.

  • Why is GPG "damn near unusable"? by Arne Padmos, about usability

    Instructive talk about the basics of usability, problems in current crypto software, etc.

  • Low Cost High Speed Photography by polygon

    Nice talk, makes you want to build it yourself.

  • Rocket science – how hard can it be?

    Very nice project, entertainingly presented.

  • Security Analysis of a Full-Body X-Ray Scanner by Eric Wustrow and Hovav Shacham

    Nice story, nice talk; there is not much more to say.

  • Thunderstrike EFI bootkits for Apple MacBooks by Trammell Hudson

    I believe the talk is very exciting, but too far away from anything I know about for me to follow it well.

  • Source Code and Cross-Domain Authorship Attribution

    Far from the first talk on stylometry by this team, and as always it is impressive how well authors can be identified from their writing style – this year also for source code.

  • Tell no-one by James Bamford, about the NSA

    A nice history lesson on the history of the NSA, definitely worth watching and well told.

  • Reproducible Builds by Mike Perry, Seth Schoen and Hans Steiner

    A good plea for why you absolutely want reproducible builds; it gets a little, but not very, technical.

  • SS7: Locate. Track. Manipulate by Tobias Engel, about mobile networks

    Unfortunately I could only enjoy this with half an ear during an angel shift, but it was very exciting and is certainly one of the security highlights.

  • Let's Encrypt by Seth Schoen, about an automated CA

    An excellent EFF project, presented well (and concisely). The actual talk is only half an hour, followed by a lot of Q&A.

  • Crypto Tales from the Trenches, about cryptography and journalism

    I am not a fan of panel discussions, but this one is quite watchable as a reality check with the world of journalism.

  • „Wir beteiligen uns aktiv an den Diskussionen“ by maha, about the „Digitale Agenda“

    Unfortunately I have only seen half of it so far. It was very comparable to maha's past Congress talks and thus, as always, entertaining.

  • The rise and fall of Internet voting in Norway by Tor Bjørstad

    A nice voting computer story.

  • Inside Field Station Berlin Teufelsberg by Bill Scannell

    Quite an entertaining history lesson, but I still can't quite figure out what to make of the talk and what it actually wanted to tell us.

  • Krypto für die Zukunft by ruedi

    A short and snappy status update on cryptography. Nothing stunning, but a good summary.

  • Damn Vulnerable Chemical Process, about industrial security

    An interesting excursion into an area of IT security that is rather remote for me and most others.

  • Internet of toilets von tbsprs

    Eine Kuriosität unter den Congress-Vorträgen, die man sich nicht entgehen lassen sollte ;-)

  • The Matter of Heartbleed

    Hier verstecken sich gleich zwei Vorträge. Beide sind gut vorgetragen, boten mir inhaltlich nicht wahnsinnig viel neues, aber sind durch die Kürze von je 20 Minuten auch erfrischend anzuhören. Der erste ist von einem Forscher, der direkt nach der bekanntgabe von Heartbleed IPv4-Netz-weite Scans durchgeführt hat und der zweite vom Security-Chef von CloudFlare, die damals eine Belohnung ausgelobt hatten für das extrahieren privater Schlüssel via Heartbleed und dann mal getestet haben, wie viel die Zertifikats-Revoke-Infrastruktur so mitmacht.

  • IFG – Mit freundlichen Grüßen von Stefan Wehrmeyer

    Die ersten 20 Minuten sind ein interessanter Vortrag über ein wichtiges Projekt, danach lässt es etwas nach.

  • Freedom in your computer and in the net von Richard Stallman

    Solange man ihn nicht zu wörtlich und ernst nimmt, ein gut gehaltenear Vortrag. Wenig überraschend gibt es keine neuen Inhalte, aber der Vortrag hat durchaus einen Kuriositätswert.

  • Jahresrückblick des CCC

    Genau wie jedes Jahr :-)

  • Vor Windows 8 wird gewarnt von ruedi

    Hielt für mich inhaltlich keine großen Überraschungen bereit, aber eine schöne, kompakte Zusammenfassung bekannter Probleme.

  • Fnord News Show von Frank und Fefe

    Nicht ganz so lustig wie die letzten Jahre, aber kann man nachts durchaus anschauen :)

  • (In)security of Mobile Banking von Eric Filiol und Paul Irolla

    Durchaus interessant, aber inhaltlich keine extrem spannenden Erkenntnisse und sprachlich anstrengend zu folgen.

  • „Hard Drive Punch“ von Aram Bartholl

    bestand eigentlich nur aus YouTube-Videos und war daher nicht so spannend.

  • Beyond PNR: Exploring airline systems von saper

    Das Abstract klang spannend, aber der Vortrag konnte mich nicht lange fesseln.

  • 31C3 Keynote von Alec Empire über … was eigentlich?

    Kann leider nicht mit den Keynotes der letzten Jahre mithalten.

17. January 2015 23:00:00


03. January 2015

Raphael Michel

Replacing Skype with SIP

I use Skype a lot in my day-to-day communication, but I am not happy with it, as it has poor privacy and a crappy client. The alternative for Voice over IP is, of course, the SIP protocol. SIP clients are also more or less crappy, but thanks to ZRTP we can have strong end-to-end encryption, and thanks to OPUS we can have sound quality at least as good as Skype's. Furthermore, SIP is a decentralized protocol and, in a perfect world, does not need any servers at all, because clients can call each other directly based on their IP addresses.

In our real world, however, there are annoying things like NAT and dynamic IP addresses, so we'd rather have a SIP server (so one can reach me at a static address) and a proxy (so NAT does not hurt that much). There are of course SIP servers out there, even free ones, but I want to host my own. The fun part is that people do not need a SIP account at all to call me – an account only helps if you want to be called yourself.

As a client, I chose Blink, which has pretty much all the features I need and is a quite nice piece of software (with quite bad packaging on most distributions1). The next-best choice would probably be the platform-independent client Jitsi, which I also used for testing.

As a server, I chose Kamailio. Kamailio looks pretty complicated, but in comparison to other SIP servers it is really simple, as it only does registration and proxying (with the actual media relaying handled by rtpproxy) and does not have advanced features like voicemail integrated, although it seems to be possible to integrate them. If you have more complex needs, take a look at Asterisk or FreeSWITCH.

In theory, setting up Kamailio on Debian is pretty simple, but in practice there were several pitfalls and I'm not sure I remember them all, so if you follow the instructions in this blogpost and you stumble over strange error messages, do not hesitate to contact me.

To get started, we install some packages. I did all of this on a relatively clean Debian testing (8.0) system.

apt-get install kamailio kamailio-mysql-modules kamailio-tls-modules mysql-server rtpproxy

This might prompt you to choose a mysql root password, if you did not have mysql-server before. Choose one and write it down. Next, we have to edit a bunch of configuration files, the first of which is /etc/default/kamailio, where we just enable the service:

RUN_KAMAILIO=yes

The next file is /etc/kamailio/kamailio.cfg. We add some flags directly after the first line:

#!define WITH_MYSQL
#!define WITH_AUTH 
#!define WITH_USRLOCDB 
#!define WITH_NAT 
#!define WITH_TLS 

Now we choose a mysql user password (we do not have to create the user) and configure it in the DBURL definition somewhere in the same file:

#!define DBURL "mysql://kamailio:firstpassword@localhost/kamailio"

Now look for the alias option and set it to the hostname you want to use as the server part of your SIP address (think of it like email hosts):

alias="sip.rami.io"

Also, look out for the line configuring the rtpproxy control socket and change it to:

modparam("rtpproxy", "rtpproxy_sock", "unix:/var/run/rtpproxy/rtpproxy.sock")

The next file is /etc/kamailio/tls.cfg, where you can configure your TLS certificates. You should definitely generate new certificates, but if you just want to get started with testing, you can use the ones provided by Kamailio for now:

private_key = /etc/kamailio/kamailio-selfsigned.key 
certificate = /etc/kamailio/kamailio-selfsigned.pem 
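
If you prefer to generate your own self-signed certificate right away, one way to do it is roughly the following sketch (this overwrites the bundled files at the paths above – adjust the CN and validity to your setup):

openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=sip.rami.io" \
    -keyout /etc/kamailio/kamailio-selfsigned.key \
    -out /etc/kamailio/kamailio-selfsigned.pem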

Now we have successfully configured the kamailio daemon. However, to control this daemon, there is the kamctl command-line utility, which has to be configured in /etc/kamailio/kamctlrc. It also needs the same mysql user and password as configured above, as well as a second username/password tuple for a read-only mysql user. You still do not have to create those users or databases yourself.

SIP_DOMAIN=sip.rami.io
DBENGINE=MYSQL
DBHOST=localhost 
DBNAME=kamailio
DBRWUSER="kamailio"  
DBRWPW="firstpassword"
DBROUSER="kamailioro"
DBROPW="secondpassword"
DBROOTUSER="root"

Now we can use this tool to create the mysql users and database tables:

kamdbctl create

This will ask you for your MySQL root password as well as whether to create one or two optional tables (answer both with yes).
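
If you also want to hand out SIP accounts (for example so that friends can register at your server and be reachable there), you can now create subscribers with kamctl. A quick sketch – the username and password here are made up:

kamctl add alice someSecretPassword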

Last but not least we configure rtpproxy in /etc/default/rtpproxy:

USER=kamailio
GROUP=kamailio
CONTROL_SOCK="unix:/var/run/rtpproxy/rtpproxy.sock" 

Yey! Except… there is a bug in the rtpproxy Debian package. Open /etc/init.d/rtpproxy and change the line DAEMON=/usr/sbin/rtpproxy to DAEMON=/usr/bin/rtpproxy.

Now start and enable the services (for Debian 7.0 or older, use the init scripts directly):

systemctl enable rtpproxy
systemctl start rtpproxy
systemctl enable kamailio
systemctl start kamailio

Done! You can now set up your SIP client. In Blink, if you are behind a NAT, set up the account, then go to the settings, select your account and go to the Server Settings tab. There, enter your hostname as Outbound proxy and select port 5061 and TLS transport. Also make sure to fill in your username and tick the Always use checkbox, if you want to.

Again, it might be that I forgot to mention one or two pitfalls I came across, so please write me an email/XMPP message if you get stuck.


  1. The Arch User Repository package for example misses at least python2-eventlib, python2-xcaplib and python2-msrplib as dependencies. 

03. January 2015 23:00:00

14. November 2014

xeens Blog

Remote control Bravia TVs that require authentication

While older Sony Bravia TVs don’t require authentication to receive commands via HTTP, mine does. If you try, it will complain with an “Action not authorized” error. Others [1] [2] have the same problem, but there were no solutions available as of writing.

Luckily the authentication scheme is very simple and can be implemented in a couple of cURL calls. Unfortunately the cURL commands get rather long, so I’ve put them into simple shell scripts. The code is available on GitHub.

Usage: Authentication

  1. Clone the repository: git clone https://github.com/breunigs/bravia-auth-and-remote
  2. Edit auth.sh and enter your TV’s IP address. Also specify your device name (most likely $HOST) and a “nick” for this authentication. It’s only used for identification purposes in the TV’s menu.
  3. Run ./auth.sh. It will make two requests to the TV that are almost identical. The first will fail due to missing authentication, but the TV will display a PIN. Enter that 4-digit code into the still running script and hit enter. The script will repeat the request, but this time with HTTP Basic Authentication without user name, using the password/PIN you just entered.
  4. The script will print an auth= line. This is the cookie that has to be passed to each further request to the TV. When you use cURL, just add --cookie 'auth=…' and your requests should work.

Commands found on other websites will usually work once you add the cookie parameter.
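
As a rough illustration – the IP address, path and payload below are placeholders, take the actual request from the commands file or from wherever you found it – such a call then looks like this:

curl --cookie 'auth=0123456789abcdef' \
     --header 'Content-Type: text/xml; charset=UTF-8' \
     --data @some-command.xml \
     'http://192.168.178.42/sony/IRCC'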

Usage: Basic Remote Control

The repository contains some helpers I found useful. They will automatically read the auth_cookie file that has been created when you ran auth.sh, so you don’t have to specify it each time.

  • ./print_ircc_codes.sh <TV-IP>: Prints list of IRCC (remote control commands) that your TV understands.
  • ./send_command.sh <TV-IP> <IRCC-Command>: Sends an actual command to the TV. Note that it returns immediately, i.e. it does not wait for the TV to actually finish the command. The TV will return an error if the given command is invalid. However, it will report success even if the command does not make sense in the current context – in that case the TV displays an OSD message instead.
  • ./example_goto_media_player.sh: An example which will (most likely) open the media player. Note that the sleeps have to be adapted to the action being executed.

Other Commands / More Details

You can find some unused commands I extracted from the TCP dump as cURL calls in the commands file.

If you want to reverse engineer some more, it’s obvious that you need to intercept the connection between the “TV SideView Sony” app and the TV. A very convenient way is to install tPacketCapture on your Android device. It works as a proxy that outputs a .pcap file you can easily inspect with Wireshark. tPacketCapture doesn’t require root.

Enjoy!

14. November 2014 21:30:00

03. October 2014

Raphael Michel

Deploying Django without downtime

One of my active projects is abiapp.net, a SaaS content management tool for the collaborative creation of yearbook contents, primarily targeted at German high school students. While we currently have few enough users to run all of abiapp.net on a single machine, we have enough users that at any time of the day at which one of us is working on the project, a user is online. We have a philosophy of pushing new code in very small intervals, days with multiple updates on the same day are nothing uncommon.

However, I do not enjoy pushing new code if I know that all users currently using the website will be interrupted – pretty much everything in our application happens in real time, so users will definitely notice a downtime of up to a minute. There were several possibilities to shorten this time frame, but I decided to work on a solution without any measurable downtime at all.

The general setup

Our application is implemented in Python using Django and runs on a Debian stable virtual machine. The Django application lives inside a gunicorn instance, which is a lightweight HTTP server on top of the WSGI protocol that you should use if you do web stuff in Python. On top of all that, there is an nginx webserver which serves static content and acts as a proxy for gunicorn. We use MySQL as our database backend and we have a background task queue for long-running tasks, powered by Celery with RabbitMQ as the message broker. The gunicorn instance is controlled by supervisord.

The old deployment setup

We currently deploy using git. We have a git remote on our production server with a post-receive hook which executes the following tasks:

  • Load the new source code into the working directory
  • Make a database backup (we learned this the hard way)
  • Perform database migrations
  • Compile LESS to CSS
  • Compress static files
  • Execute unit tests, just to be sure
  • Reload gunicorn

However, this setup has some huge problems. The biggest one is that the moment we load our new code into the working directory, Django will use our new templates and static files even though we are still running the old Python code. This is already bad, but it gets way worse in the unlikely event that the unit tests fail and the new Python code is not loaded – then we're stuck in this intermediate state of broken things.

The new deployment setup

We now have two completely independent instances of the application. We have our git repository three times on the production server:

$ ls app.*
app.src/
app.run.A/
app.run.B/

app.src is the bare git repository we push our changes to and app.run.A and app.run.B are two copies of it used for running the application. The application always runs twice:

$ supervisorctl status
abiapp.A       RUNNING    pid 6184, uptime 0:00:02
abiapp.B       RUNNING    pid 6185, uptime 0:00:02

One of those processes runs with the code and templates from app.run.A, one with the other. They listen on different sockets and supervisord knows them as distinct services.
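
For reference, the corresponding supervisord configuration could look roughly like the sketch below; it assumes a WSGI module called abiapp.wsgi and re-uses the virtualenv and socket paths shown elsewhere in this post:

; two gunicorn instances, one per code copy, listening on separate sockets
[program:abiapp.A]
command=/home/abiapp/appenv/bin/gunicorn abiapp.wsgi:application --bind unix:/home/abiapp/run/gunicorn.A.sock
directory=/home/abiapp/app.run.A/src
user=abiapp

[program:abiapp.B]
command=/home/abiapp/appenv/bin/gunicorn abiapp.wsgi:application --bind unix:/home/abiapp/run/gunicorn.B.sock
directory=/home/abiapp/app.run.B/src
user=abiapp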

We also have two copies of our nginx webserver config, one of them pointing to the socket of process A and one to the socket of process B. Only one of them is enabled at the same time:

$ ls /etc/nginx/sites-available/
abiapp.net-A
abiapp.net-B
$ ls /etc/nginx/sites-enabled/
abiapp.net-A

The nginx config

The nginx configuration looks a bit like this:

upstream abiapp_app_server_A {
    server unix:/home/abiapp/run/gunicorn.A.sock fail_timeout=0;
}

server {
    listen 443;
    server_name .abiapp.net;

    # … SSL foo …

    location /static/ { # The static files directory
        alias /var/www/users/abiapp/www.abiapp.net/static.A/;
        access_log off;
        expires 7d;
        add_header Cache-Control public;
        add_header Pragma public;
        access_log off;
    }

    location /media/ {
        # …
    }

    location / { # The application proxy
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://abiapp_app_server_A;
            break;
        }
        root /var/www/users/abiapp/www.abiapp.net/;
    }

    # error pages …
}

The git hook

So our post-receive hook has to find out which of those two processes currently serves users and replace the other process with the new code. We'll now walk through this git hook piece by piece.

The first part determines which instance is running by looking into a text file containing the string A or B. We'll write the current instance name into this file later in our hook.

#!/bin/bash
if [[ "$(cat /home/abiapp/run/selected)" == "B" ]]; then
    RUNNING="B"
    STARTING="A"
else
    RUNNING="A"
    STARTING="B"
fi

We now activate the virtual python environment our application lives in and move to the application's source code.

unset GIT_DIR
source /home/abiapp/appenv/bin/activate
cd /home/abiapp/app.run.$RUNNING/src

First of all, we'll do a backup, just to be sure. You could also use mysqldump here.

echo "* Backup"
python manage.py dumpdata > ~/backup/dump_$(date +"%Y-%m-%d-%H-%M")_push.json

We now pull the new source code into our current directory.

echo "* Reset server repository"
git reset --hard
git pull /home/abiapp/app.src master || exit 1
cd src

Then, we perform database migrations and deploy our static files.

echo "* Perform database migrations"
python manage.py migrate || exit 1
echo "* Deploy static files"
python manage.py collectstatic --noinput || exit 1
echo "* Compress static files"
python manage.py compress || exit 1

Note: The two source code directories have slightly different Django configuration files: Their STATIC_ROOT settings point to different directories.

We now perform our unit tests on the production server:

echo "* Unit Tests"
python manage.py test app || exit 1;

And finally restart the gunicorn process.

echo "* Restart app"
sudo supervisorctl restart abiapp.$STARTING

Remember, we just restarted a process which was not visible to the Internet – we replaced the idling one of our two instances. Now the time has come to reload our webserver:

echo "* Reload webserver"
sudo /usr/local/bin/abiapp-switch $STARTING

The abiapp-switch script does no more than replace the symlink in /etc/nginx/sites-enabled with the other configuration file and then call service nginx reload.
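
A minimal sketch of what such a switch script could look like, assuming the configuration file names from above:

#!/bin/bash
# Hypothetical reconstruction of abiapp-switch: enable the nginx site for
# the given instance ("A" or "B"), disable the other one, reload nginx.
set -e
TARGET=$1
rm -f /etc/nginx/sites-enabled/abiapp.net-A /etc/nginx/sites-enabled/abiapp.net-B
ln -s /etc/nginx/sites-available/abiapp.net-$TARGET /etc/nginx/sites-enabled/abiapp.net-$TARGET
service nginx reload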

This is the moment our new code goes live. On the reload call, nginx spawns new workers using the new configuration1. All old workers will finish their current requests and then shut down, so there really is no measurable downtime. To complete the hook, we restart Celery (which waits for all running workers to finish their tasks, then restarts with the new code):

echo "* Restart task queue"
sudo service celeryd restart 

And finally we report success and store the name of the newly running instance.

echo "Done :-)"
echo "Instance $STARTING is running now"
echo $STARTING > /home/abiapp/run/selected

So we're done here. As you may have noticed, all early steps of the hook included an || exit 1. In case of a failed migration, unit test or compression, the whole process would just abort and leave us virtually unharmed, as the working instance keeps running.

A word on database migrations

As you may have noticed, we still have one flaw in our workflow: the database migrations are applied some time before the new code is running. The only really clean solution is to split each of your 'destructive' database migrations into multiple deployment iterations: if you, for example, remove a field from a model (a column from a table), you'd first push the new code with all usages of the field removed and then, in a second push, deploy the database migration which removes the column.
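
To make that concrete, the second push would then contain nothing but the schema change – a sketch with made-up app, model and field names:

# Hypothetical migration shipped in the second push: no code references
# the field anymore at this point, so dropping the column is safe.
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('yearbook', '0007_previous_migration'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='article',
            name='legacy_notes',
        ),
    ]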

03. October 2014 22:00:00

12. September 2014

Mero’s Blog

The four things I miss about go

As people who know me know, my current favourite language is go. One of the best features of go is the lack of features. This is actually the reason I preferred C over most scripting languages for a long time – it does not overburden you with language-features that you first have to wrap your head around. You don't have to think for a while about what classes or modules or whatever you want to have, you just write your code down and the (more or less) entire language can easily fit inside your head. One of the best writeups of this (contrasting it with python) was done by Gustavo Niemeyer in a blogpost a few years back.

So when I say there are a few things I miss about go, this does not mean I wish for them to be included. I subjectively miss them and it would definitely make me happy if they existed. But I still very much like the go devs for prioritizing simplicity over making me happy.

So let's dig in.

  1. Generics
  2. Weak references
  3. Dynamic loading of go code
  4. Garbage-collected goroutines

Generics

So let's get this elephant out of the room first. I think this is the most frequently named feature lacking from go. Generics are asked about so often that they have their own entry in the go FAQ. The usual answers range from "maybe they will get in" to "I don't understand why people want generics, go has generic programming using interfaces". To illustrate one shortcoming of the (current) interface approach, consider writing a (simple) graph algorithm:

type Graph [][]int

func DFS(g Graph, start int, visitor func(int)) {
    visited := make([]bool, len(g))

    var dfs func(int)
    dfs = func(i int) {
        if visited[i] {
            return
        }
        visitor(i)
        visited[i] = true
        for _, j := range g[i] {
            dfs(j)
        }
    }

    dfs(start)
}

This uses an adjacency list to represent the graph and does a recursive depth-first-search on it. Now imagine, you want to implement this algorithm generically (given, a DFS is not really hard enough to justify this, but you could just as easily have a more complex algorithm). This could be done like this:

type Node interface{}

type Graph interface {
    Neighbors(Node) []Node
}

func DFS(g Graph, start Node, visitor func(Node)) {
    visited := make(map[Node]bool)

    var dfs func(Node)
    dfs = func(n Node) {
        if visited[n] {
            return
        }
        visitor(n)
        visited[n] = true
        for _, n2 := range g.Neighbors(n) {
            dfs(n2)
        }
    }

    dfs(start)
}

This seems simple enough, but it has a lot of problems. For example, we lose type safety: even if we write Neighbors(Node) []Node, there is no way to tell the compiler that these instances of Node will actually always be the same type. So an implementation of the graph interface would have to do type assertions all over the place. Another problem is:

type AdjacencyList [][]int

func (l AdjacencyList) Neighbors(n Node) []Node {
    i := n.(int)
    var result []Node
    for _, j := range l[i] {
        result = append(result, j)
    }
    return result
}

An implementation of this interface as an adjacency list actually performs pretty badly, because it cannot return an []int but must return a []Node, and even though int satisfies Node, []int is not assignable to []Node (for good reasons that lie in the implementation of interfaces, but still).

The way to solve this is to always map your nodes to integers. This is what the standard library does in the sort package, and it is exactly the same problem there. But it might not always be possible, let alone straightforward, to do this for graphs, for example if they do not fit into memory (e.g. a web crawler). The answer is to have the caller maintain this mapping via a map[Node]int or something similar, but… meh.
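
For illustration, the integer-index variant (mirroring what the sort package does) could look like the sketch below; the interface is made up for this example, and the caller would keep a map[Node]int next to it:

// The algorithm only ever sees integer indices; the caller owns the
// mapping between indices and whatever its real node type is.
type IndexedGraph interface {
    Len() int              // number of nodes
    Neighbors(i int) []int // neighbors of node i, as indices
}

func DFS(g IndexedGraph, start int, visitor func(int)) {
    visited := make([]bool, g.Len())

    var dfs func(int)
    dfs = func(i int) {
        if visited[i] {
            return
        }
        visitor(i)
        visited[i] = true
        for _, j := range g.Neighbors(i) {
            dfs(j)
        }
    }

    dfs(start)
}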

Weak references

I have to admit that I am not sure my use case here is really an important or even a very nice one, but let's assume I want to have a database abstraction that transparently handles pointer indirection. So let's say I have two tables T1 and T2, and T2 has a foreign key referencing T1. I think it would be pretty neat if a database abstraction could automatically deserialize this into a pointer to a T1 value A. But to do this, we would a) need to be able to recognize A on a later Put (so if the user changes A and later stores it, the database knows which row in T1 to update) and b) hand out the same pointer if another row in T2 references the same id.

The only way I can think of to do this is to maintain a map[Id]*T1 (or similar), but this would prevent the handed-out values from ever being garbage-collected. Even though there are hacks that allow some use cases for weak references to be emulated, I don't see how they would work here.

So, as in the case of generics, this mainly means that some elegant APIs are not possible in go for library authors (and, as I said, in this specific case it probably isn't a very good idea anyway; for example, you would have to think about what happens if the user gets the same value from the database in two different goroutines).

Dynamic loading of go code

It would be useful to be able to dynamically load go code at runtime in order to build plugins for go software. Specifically, I want a good go replacement for jekyll, because I went through some Ruby version hell with it lately (for example, jekyll serve -w still does not work for me with the version packaged in Debian) and I think a statically linked go binary would remove a lot of potential pain points here. But plugins are a really important feature of jekyll for me, so I still want to be able to customize a page with plugins (how to avoid introducing the same version hell with this is another topic).

The currently recommended ways to do plugins are a) as go-packages and recompiling the whole binary for every change of a plugin and b) using sub-processes and net/rpc.

I don't feel a) is a good fit here, because it means maintaining a separate binary for every jekyll site you have, which just sounds like a smallish nightmare for binary distribution (plus I have use cases for plugins where even the relatively small compilation times of go would result in an intolerable increase in startup time).

b), on the other hand, comes with a lot of runtime penalty: for example, I cannot really pass interfaces between plugins, let alone use channels, and every function call has to have its parameters and results serialized and deserialized. Where in the same process I can just define a transformation between different formats as a func(r io.Reader) io.Reader or something, in the RPC context I first have to transmit the entire file over a socket, or have the plugin author implement a net/rpc server themselves and somehow pass a reference to it over the wire. This increases the burden on plugin authors too much, I think.

Luckily, there seems to have been some thought put into how to implement this recently, so maybe we will see it in the nearish future.

Garbage-collected goroutines

Now, this is the only thing I really don't understand why it is not part of the language. Concurrency in go is a first-class citizen and garbage-collection is a feature emphasized all the time by the go-authors as an advantage. Yet, they both seem to not play entirely well together, making concurrency worse than it has to be.

Something like the standard example of how goroutines and channels work goes a little bit like this:

func Foo() {
    ch := make(chan int)
    go func() {
        i := 0
        for {
            ch <- i
            i++
        }
    }()

    for {
        fmt.Println(<-ch)
    }
}

Now, this is all well, but what if we want to exit the loop prematurely? We have to do something like this:

func Foo() {
    ch := make(chan int)
    done := make(chan bool)
    go func() {
        i := 0
        for {
            select {
                case ch <- i:
                    i++
                case <-done:
                    return
            }
        }
    }()
    for {
        i := <-ch
        if i > 1000 {
            break
        }
        fmt.Println(i)
    }
}

Because otherwise the goroutine would just stay around for all eternity, effectively being leaked memory. There are entire talks built around this and similar problems, and I don't really understand why. If we add a break to our first version, Foo returns and suddenly all references to ch, except the one the goroutine is blocked on writing to, are gone and could be garbage-collected. The runtime can already detect that all goroutines are sleeping and we have a deadlock, and the garbage collector can accurately see what references there are to a given channel, so why can we not combine the two to conclude "there is absolutely no way this channel write can ever succeed, so let's just kill the goroutine and gc all its memory"? This would have zero impact on existing programs (because, as you cannot get any references to goroutines, a deadlocked one can have no side effect on the rest of the program), but it would make channels so much more fun to work with. It would make channels as iterators a truly elegant pattern, it would simplify pipelines and it would possibly allow a myriad of other use cases for channels I cannot think of right now. Heck, you could even think about (not sure if this is possible or desirable) running any deferred statements when a goroutine is garbage-collected, so that all other resources held by it are correctly released.

This is the one thing I really wish would be added to the language. Really diving into channels and concurrency is very much spoiled for me right now, because I always have to think about draining every channel, about which goroutine closes which channels, about passing cancellation channels…

12. September 2014 17:10:28

05. September 2014

sECuREs Webseite

Fiber7 performance

Ever since I moved to Zürich, I wanted to get a fiber internet connection. I had lived with a 6 Mbps DSL line at my parents’ place for about 10 years, so I was looking forward to a lot more Megabits and a lot less latency. For reasons that I won’t go into in this article, it took me about a year to get a fiber connection, and in the end I had to go with Swisscom (instead of init7 on top of EWZ).

But then fiber7 launched. They provide a 1 Gbps symmetrical connection (Swisscom provided a 1 Gbps/100 Mbps down/up connection) for a lot less money than Swisscom, and with native, static IPv6.

A couple of people are interested in how fiber7 performs, and after being connected for about 2 months, I think I can answer this question by now :-).

Latency

I started running smokeping to see how my internet connection performs back when I was with Swisscom, because they had some routing issues to certain networks. This would manifest itself with getting 50 KB/s transfer rates, which is unacceptable for image boards or any other demanding application.

So, here is the smokeping output for google.ch during the time period that covers both my Swisscom line, the temporary cablecom connection and finally fiber7:

smokeping latency to google.ch (annotated)

What you can see is that with Swisscom, I had a 6 ms ping time to google.ch. Interestingly, once I used the MikroTik RB2011 instead of the Swisscom-provided internet box, the latency improved to 5 ms.

Afterwards, latency changed twice. For the first change, I’m not sure what happened. It could be that Swisscom turned up a new, less loaded port to peer with Google. Or perhaps they configured their systems in a different way, or exchanged some hardware. The second change is relatively obvious: Swisscom enabled GGC, the Google Global Cache. GGC is a caching server provided by Google that is placed within the ISP’s own network, typically providing much better latencies (due to being placed very close to the customer) and reducing the traffic between the ISP and Google. I’m confident that Swisscom uses it because of the reverse pointer record of the IP address to which google.ch resolves. So with that, latency is between 1 ms and 3 ms.

Because switching to Fiber7 involves recabling the physical fiber connection in the POP, there is a 2-day downtime involved. During that time I used UPC cablecom’s free offering, which is a 2 Mbps cable connection that you can use for free (as long as you pay for the cable subscription itself, and after paying 50 CHF for the modem).

As you can see on the graph, the cable connection has a surprisingly good latency of around 8 ms to google.ch — until you start using it. Then it’s clear that 2 Mbps is not enough and the latency shoots through the roof.

The pleasant surprise is that fiber7’s ping time to google.ch is about 0.6 ms (!). They achieve such low latency with a dedicated 10 gig interconnect to Google at the Interxion in Glattbrugg.

Longer-term performance

smokeping latency measurements to google.ch over more than a week

Let me say that I’m very happy with the performance of my internet connection. Some of the measurements where packet loss is registered may be outside of fiber7’s control, or even caused by me, when recabling my equipment for example. Overall, the latency is fine and consistent, much more so than with Swisscom. I have never experienced an internet outage during the two months I’ve been with fiber7 now.

Also, while I am not continuously monitoring my bandwidth, rest assured that whenever I download something, I am able to utilize the full Gigabit, meaning I get an aggregate speed of 118 MB/s from servers that support it. Such servers are for example one-click hosters like uploaded, but also Debian mirrors (as soon as you download from multiple ones in parallel).

Conclusion

tl;dr: fiber7 delivers. Incredible latency, no outages (yet), full download speed.

by Michael Stapelberg at 05. September 2014 12:00:00

31. August 2014

sECuREs Webseite

Replicated PostgreSQL with pgpool2

I run multiple web services, mostly related to i3wm.org. All of them use PostgreSQL as their database, so the data that is stored in that PostgreSQL database is pretty important to me and the users of these services.

For a while now, I have been thinking about storing that data in a more reliable way. Currently, it is stored on a single server, and is backed up to two different locations (one on-site, one off-site) every day. The server in question has a RAID-1 of course, but still: the setup implies that if that one server dies, the last backup may be about a day old in the worst case, and it could also take me significant time to get the services back up.

The areas in which I’d like to improve my setup are thus:

  1. Durability: In case the entire server dies, I want to have an up-to-date copy of all data.
  2. Fault tolerance: In case the entire server dies, I want to be able to quickly switch to a different server. A secondary machine should be ready to take over, albeit not fully automatically because fully automatic solutions typically are either fragile or require a higher number of servers than I’m willing to afford.

For PostgreSQL, there are various settings and additional programs that you can use which will provide you with some sort of clustering/replication. There is an overview in the PostgreSQL wiki (“Replication, Clustering, and Connection Pooling”). My solution of choice is pgpool2 because it seems robust and mature (yet under active development) to me, it is reasonably well documented and I think I roughly understand what it does under the covers.

The plan

I have two servers, located in different data centers, that I will use for this setup. The number of servers does not really matter, meaning you can easily add a third or fourth server (increasing latency with every server of course). However, the low number of servers places some restrictions on what we can do. As an example, solutions that involve global consistency based on paxos/raft quorums will not work with only two servers. As a consequence, master election is out of the question and a human will need to do the actual failover/recovery.

Each of the two servers will run PostgreSQL, but only one of them will run pgpool2 at a time. The DNS records for e.g. faq.i3wm.org will point to the server on which pgpool2 is running, so that server handles 100% of the traffic. Let’s call the server running pgpool2 the primary, and the other server the secondary. All queries that modify the database will still be sent to the secondary, but the secondary does not handle any user traffic. This could be accomplished by either not running the applications in question, or by having them connect to the pgpool2 on the primary.

When a catastrophe happens, the DNS records will be switched to point to the old-secondary server, and pgpool2 will be started there. Once the old-primary server is available again, it will become the secondary server, so that in case of another catastrophe, the same procedure can be executed again.

With a solution that involves only two servers, a frequently encountered problem is the split-brain situation. This means both servers think they are primary, typically because there is a network partition, meaning the servers cannot talk to each other. In our case, it is important that user traffic is not handled by the secondary server. This could happen after failing over because DNS heavily relies on caching, so switching the record does not mean that suddenly all queries will go to the other server — this will only happen over time. A solution for that is to either kill pgpool2 manually if possible, or have a tool that kills pgpool2 when it cannot verify that the DNS record points to the server it runs on.
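
A crude sketch of such a watchdog – the service hostname is an example, and you would run this periodically (e.g. from cron) on both machines:

#!/bin/sh
# Hypothetical check: stop pgpool2 if the service record no longer points
# at this machine, so that a lagging DNS switch cannot cause split-brain.
SERVICE_IP=$(dig +short faq.i3wm.org | tail -n 1)
MY_IP=$(dig +short "$(hostname -f)" | tail -n 1)
if [ -n "$SERVICE_IP" ] && [ "$SERVICE_IP" != "$MY_IP" ]; then
    systemctl stop pgpool2.service
fi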

Configuration

I apologize for the overly long lines in some places, but there does not seem to be a way to use line continuations in the PostgreSQL configuration file.

Installing and configuring PostgreSQL

The following steps need to be done on each database server, whereas pgpool2 will only be installed on precisely one server.

Also note that a prerequisite for the configuration described below is that hostnames are configured properly on every involved server, i.e. hostname -f should return the fully qualified hostname of the server in question, and other servers must be able to connect to that hostname.

apt-get install postgresql postgresql-9.4-pgpool2 rsync ssh
cat >>/etc/postgresql/9.4/main/postgresql.conf <<'EOT'
listen_addresses = '*'

max_wal_senders = 1
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/9.4/main/archive_log/backup_in_progress || (test -f /var/lib/postgresql/9.4/main/archive_log/%f || cp %p /var/lib/postgresql/9.4/main/archive_log/%f)'
EOT
install -o postgres -g postgres -m 700 -d \
  /var/lib/postgresql/9.4/main/archive_log
systemctl restart postgresql.service

pgpool comes with an extension (implemented in C) that provides a couple of functions which are necessary for recovery. We need to “create” the extension in order to be able to use these functions. After running the following command, you can double-check with \dx that the extension was installed properly.

echo 'CREATE EXTENSION "pgpool_recovery"' | \
  su - postgres -c 'psql template1'

During recovery, pgpool needs to synchronize data between the PostgreSQL servers. This is done partly by running pg_basebackup on the recovery target via SSH and using rsync (which connects using SSH). Therefore, we need to create a passwordless SSH key for the postgres user. For simplicity, I am assuming that you’ll copy the same id_rsa and authorized_keys files onto every database node. You’ll also need to connect to every other database server once in order to get the SSH host fingerprints into the known_hosts file.

su - postgres
ssh-keygen -f /var/lib/postgresql/.ssh/id_rsa -N ''
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
exit

We’ll also need to access remote databases with pg_basebackup non-interactively, so we need a password file:

su - postgres
echo '*:*:*:postgres:wQgvBEusf1NWDRKVXS15Fc8' > .pgpass
chmod 0600 .pgpass
exit

When pgpool recovers a node, it first makes sure the data directory is up to date, then it starts PostgreSQL and tries to connect repeatedly. Once the connection succeeded, the node is considered healthy. Therefore, we need to give the postgres user permission to control postgresql.service:

apt-get install sudo
cat >/etc/sudoers.d/pgpool-postgres <<'EOT'
postgres ALL=(ALL:ALL) NOPASSWD:/bin/systemctl start postgresql.service
postgres ALL=(ALL:ALL) NOPASSWD:/bin/systemctl stop postgresql.service
EOT

Now enable password-based authentication for all databases and replication traffic. In case your database nodes/clients don’t share a common hostname suffix, you may need to use multiple entries or replace the hostname suffix by “all”.

cat >>/etc/postgresql/9.4/main/pg_hba.conf <<'EOT'
host    all             all             .zekjur.net             md5     
host    replication     postgres        .zekjur.net             md5     
EOT

After enabling password-based authentication, we need to set a password for the postgres user which we’ll use for making the base backup:

echo "ALTER USER postgres WITH PASSWORD 'wQgvBEusf1NWDRKVXS15Fc8';" | \
  su postgres -c psql

Installing pgpool2

apt-get install pgpool2
cd /etc/pgpool2
gunzip -c /usr/share/doc/pgpool2/examples/\
pgpool.conf.sample-replication.gz > pgpool.conf

To interact with pgpool2, there are a few command-line utilities whose names start with pcp_. In order for these to work, we must configure a username and password. For simplicity, I’ll re-use the password we set earlier for the postgres user, but you could choose to use an entirely different username/password:

echo "postgres:$(pg_md5 wQgvBEusf1NWDRKVXS15Fc8)" >> pcp.conf

In replication mode, when the client should authenticate towards the PostgreSQL database, we also need to tell pgpool2 that we are using password-based authentication:

sed -i 's/trust$/md5/g' pool_hba.conf
sed -i 's/\(enable_pool_hba =\) off/\1 on/g' pgpool.conf

Furthermore, we need to provide all the usernames and passwords that we are going to use to pgpool2:

touch pool_passwd
chown postgres.postgres pool_passwd
pg_md5 -m -u faq_i3wm_org secretpassword

For the use-case I am describing here, it is advisable to turn off load_balance_mode, otherwise queries will be sent to all healthy backends, which is slow because they are not in the same network. In addition, we’ll assign a higher weight to the backend which runs on the same machine as pgpool2, so read-only queries are sent to the local backend only.

sed -i 's/^load_balance_mode = on/load_balance_mode = off/g' \
    pgpool.conf

Now, we need to configure the backends.

sed -i 's/^\(backend_\)/# \1/g' pgpool.conf

cat >>pgpool.conf <<'EOT'
backend_hostname0 = 'midna.zekjur.net'
backend_port0 = 5432
backend_weight0 = 2
backend_data_directory0 = '/var/lib/postgresql/9.4/main'

backend_hostname1 = 'alp.zekjur.net'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/9.4/main'
EOT

Overview: How recovery works

Let’s assume that pgpool is running on midna.zekjur.net (so midna is handling all the traffic), and alp.zekjur.net crashed. pgpool will automatically degrade alp and continue operation. When you tell it to recover alp because the machine is available again, it will do three things:

  1. (“1st stage”) SSH into alp and run pg_basebackup to get a copy of midna’s database.
  2. (“2nd stage”) Disconnect all clients so that the database on midna will not be modified anymore. Flush all data to disk on midna, then rsync the data to alp. pg_basebackup from 1st stage will have copied almost all of it, so this is a small amount of data — typically on the order of 16 MB, because that’s how big one WAL file is.
  3. Try to start PostgreSQL on alp again. pgpool will wait for 90 seconds by default, and within that time PostgreSQL must start up in such a state that pgpool can connect to it.

So, during the 1st stage, which copies the entire database, traffic will still be handled normally; only during the 2nd stage and until PostgreSQL has started up are no queries served.

Configuring recovery

For recovery, we need to provide pgpool2 with a couple of shell scripts that handle the details of how the recovery is performed.

sed -i 's/^\(recovery_\|client_idle_limit_in_recovery\)/# \1/g' \
    pgpool.conf

cat >>pgpool.conf <<'EOT'
recovery_user = 'postgres'
recovery_password = 'wQgvBEusf1NWDRKVXS15Fc8'

# This script is being run by invoking the pgpool_recovery() function on
# the current master(primary) postgresql server. pgpool_recovery() is
# essentially a wrapper around system(), so it runs under your database
# UNIX user (typically "postgres").
# Both scripts are located in /var/lib/postgresql/9.4/main/
recovery_1st_stage_command = '1st_stage.sh'
recovery_2nd_stage_command = '2nd_stage.sh'

# Immediately disconnect all clients when entering the 2nd stage recovery
# instead of waiting for the clients to disconnect.
client_idle_limit_in_recovery = -1
EOT

The 1st_stage.sh script logs into the backend that should be recovered and uses pg_basebackup to copy a full backup from the master(primary) backend. It also sets up the recovery.conf which will be used by PostgreSQL when starting up.

cat >/var/lib/postgresql/9.4/main/1st_stage.sh <<'EOF'
#!/bin/sh
TS=$(date +%Y-%m-%d_%H-%M-%S)
MASTER_HOST=$(hostname -f)
MASTER_DATA=$1
RECOVERY_TARGET=$2
RECOVERY_DATA=$3

# Move the PostgreSQL data directory out of our way.
ssh -T $RECOVERY_TARGET \
    "[ -d $RECOVERY_DATA ] && mv $RECOVERY_DATA $RECOVERY_DATA.$TS"

# We only use archived WAL logs during recoveries, so delete all
# logs from the last recovery to limit the growth.
rm $MASTER_DATA/archive_log/*

# With this file present, our archive_command will actually
# archive WAL files.
touch $MASTER_DATA/archive_log/backup_in_progress

# Perform a backup of the database.
ssh -T $RECOVERY_TARGET \
    "pg_basebackup -h $MASTER_HOST -D $RECOVERY_DATA --xlog"

# Configure the restore_command to use the archive_log WALs we’ll copy
# over in 2nd_stage.sh.
echo "restore_command = 'cp $RECOVERY_DATA/archive_log/%f %p'" | \
    ssh -T $RECOVERY_TARGET "cat > $RECOVERY_DATA/recovery.conf"
EOF
cat >/var/lib/postgresql/9.4/main/2nd_stage.sh <<'EOF'
#! /bin/sh
MASTER_DATA=$1
RECOVERY_TARGET=$2
RECOVERY_DATA=$3
port=5432

# Force to flush current value of sequences to xlog
psql -p $port -t -c 'SELECT datname FROM pg_database WHERE NOT datistemplate AND datallowconn' template1|
while read i
do
  if [ "$i" != "" ];then
    psql -p $port -c "SELECT setval(oid, nextval(oid)) FROM pg_class WHERE relkind = 'S'" $i
  fi
done

# Flush all transactions to disk. Since pgpool stopped all connections,
# there cannot be any data that does not reside on disk until the
# to-be-recovered host is back on line.
psql -p $port -c "SELECT pgpool_switch_xlog('$MASTER_DATA/archive_log')" template1

# Copy over all archive logs at once.
rsync -avx --delete $MASTER_DATA/archive_log/ \
    $RECOVERY_TARGET:$RECOVERY_DATA/archive_log/

# Delete the flag file to disable WAL archiving again.
rm $MASTER_DATA/archive_log/backup_in_progress
EOF
cat >/var/lib/postgresql/9.4/main/pgpool_remote_start <<'EOF'
#!/bin/sh
ssh $1 sudo systemctl start postgresql.service
EOF

chmod +x /var/lib/postgresql/9.4/main/1st_stage.sh
chmod +x /var/lib/postgresql/9.4/main/2nd_stage.sh
chmod +x /var/lib/postgresql/9.4/main/pgpool_remote_start

Now, let’s start pgpool2 and verify that it works and that we can access our first node. The pcp_node_count command should return an integer number like “2”. The psql command should be able to connect and you should see your database tables when using \d.

systemctl restart pgpool2.service
pcp_node_count 10 localhost 9898 postgres wQgvBEusf1NWDRKVXS15Fc8
psql -p 5433 -U faq_i3wm_org faq_i3wm_org

Monitoring

pgpool2 intercepts a couple of SHOW statements, so you can use the SQL command SHOW pool_nodes to see how many nodes are there:

> SHOW pool_nodes;
 node_id |     hostname     | port | status | lb_weight |  role  
---------+------------------+------+--------+-----------+--------
 0       | midna.zekjur.net | 5432 | 2      | 0.666667  | master
 1       | alp.zekjur.net   | 5432 | 2      | 0.333333  | slave
(2 rows)

You could export a cgi-script over HTTP, which just always runs this command, and then configure your monitoring software to watch for certain strings in the output. Note that you’ll also need to configure a ~/.pgpass file for the www-data user. As an example, to monitor whether alp is still a healthy backend, match for “alp.zekjur.net,5432,2” in the output of this script:

#!/bin/sh
cat <<'EOT'
Content-type: text/plain

EOT
exec echo 'SHOW pool_nodes;' | psql -t -A -F, --host localhost \
  -U faq_i3wm_org faq_i3wm_org
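
The ~/.pgpass for the www-data user could be set up like this – a sketch, assuming www-data’s home directory is /var/www (the Debian default) and re-using the credentials from the examples above:

install -o www-data -g www-data -m 600 /dev/null /var/www/.pgpass
echo 'localhost:*:faq_i3wm_org:faq_i3wm_org:secretpassword' > /var/www/.pgpass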

Performing/Debugging a recovery

In order to recover node 1 (alp in this case), use:

pcp_recovery_node 300 localhost 9898 postgres wQgvBEusf1NWDRKVXS15Fc8 1

The “300” used to be a timeout, but these days it’s only supported for backwards compatibility and has no effect.

In case the recovery fails, the only thing you’ll get back from pcp_recovery_node is the text “BackendError”, which is not very helpful. The logfile of pgpool2 contains a bit more information, but to debug recovery problems, I typically strace all PostgreSQL processes and see what the scripts are doing/where they are failing.

pgpool2 behavior during recovery

In order to see how pgpool2 performs during recovery/degradation, you can use this little Go program that tries to do three things every 0.25 seconds: check that the database is healthy (SELECT 1;), run a meaningful SELECT, run an UPDATE.

When a database node goes down, a single query may fail until pgpool2 realizes that the node needs to be degraded. If your database load is light, chances are that pgpool2 will realize the database is down without even failing a single query, though.

2014-08-13 23:15:27.638 health: ✓  select: ✓  update: ✓
2014-08-13 23:15:28.700 insert failed: driver: bad connection
2014-08-13 23:15:28.707 health: ✓  select: ✓  update: x

During recovery, there is a time when pgpool2 will just disconnect all clients and not answer any queries any more (2nd stage). In this case, the state lasted for about 20 seconds:

…
2014-08-13 23:16:01.900 health: ✓  select: ✓  update: ✓
2014-08-13 23:16:02.161 health: ✓  select: ✓  update: ✓
# no queries answered here
2014-08-13 23:16:23.625 health: ✓  select: ✓  update: ✓
2014-08-13 23:16:24.308 health: ✓  select: ✓  update: ✓
…

Conclusion

Setting up a PostgreSQL setup that involves pgpool2 is definitely a lot of work. It could be a bit easier if the documentation was more specific on the details of how recovery is supposed to work and would include the configuration that I came up with above. Ideally, something like pgpool2 would be part of PostgreSQL itself.

I am not yet sure how much software I’ll need to touch in order to make it gracefully deal with the PostgreSQL connection dying and coming back up. I know of at least one program I use (buildbot) which does not handle this situation well at all — it needs a complete restart to work again.

Time will tell if the setup is stable and easy to maintain. In case I make negative experiences, I’ll update this article :).

by Michael Stapelberg at 31. August 2014 16:00:00

26. August 2014

Raphael Michel

SSL-enabled web servers on Android devices

Scenario

I am currently working on a software project for a client in which multiple Android devices are involved. One of them works as a server while the other devices work as clients, fetching and pushing data from and to the server to keep all the data in sync. The software is supposed to be deployed on trusted devices in a trusted wireless network specifically set up for this application. Nevertheless, experience tells me that in reality it will probably be used in wireless networks which are not isolated.

The software is to be used at event locations and must not require an internet connection at any time. It should work with any set of Android (4.0+) devices, with having the app installed being the only prerequisite, whereas the same app should work as both client and server. The software should assume that the connection between the devices might be broken at any time, and configuration should be simple for non-technical people.

As I'm pretty sure I'm not the only one with these requirements for an Android app project, I'm going to talk a bit about how I've done it.

Implementation idea

The protocol used between the Android devices is HTTP. As I must assume that the app is being used in unisolated WiFi networks, the communication has to be SSL encrypted.

It is quite easy to run a Jetty server inside an Android app1, and it is also possible to use SSL encryption with Jetty. However, all documentation and examples on this that I managed to find suggested creating an SSL cert with keytool on your computer, storing it in a BKS keystore and shipping it with your application, or having your users do this and letting them specify a path to the keystore on the SD card.

Neither of those is a real option for me: I cannot assume that any of my users will ever create their own certificate with keytool, and I also cannot ship a hard-coded private key with my application, as the app itself might not be considered a secret and using SSL with known keys is not even slightly better than not using encryption at all. Therefore, I must generate the SSL key and certificate on each device separately on the first use of the app. I will present my code for this in the next section.

After the certificate is generated, the clients need to know the certificate's fingerprint: if the client just accepted any certificate, I could have avoided all the key generation work above, because a client accepting all certificates is nearly as bad as shipping the same keys on every device. As the system has to work offline and ad hoc, there is no way to use something like a CA infrastructure.

Luckily, there is an elegant way to solve both the certificate and the configuration problem at once: the server device shows a QR code containing the IP address, port and SSL fingerprint of the server as well as an authentication token (being in a public network, we want both encryption and authentication). The client just has to scan this QR code and gets all the information necessary for establishing a secure connection.
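
Computing the fingerprint that goes into the QR code is straightforward once the keystore exists. A sketch using the FILE_NAME and KEY_ALIAS constants from the key generation code below and Commons Codec (exception handling omitted):

// Load the generated keystore and hash the certificate's DER encoding;
// the resulting hex string is what the clients later compare against.
KeyStore keyStore = KeyStore.getInstance("BKS");
InputStream in = ctx.openFileInput(FILE_NAME);
try {
    keyStore.load(in, keystorePassword.toCharArray());
} finally {
    in.close();
}
Certificate cert = keyStore.getCertificate(KEY_ALIAS);
String fingerprint = DigestUtils.sha1Hex(cert.getEncoded());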

Implementation details

Dependencies

This effort has introduced a bunch of new dependencies to my application

  • My webserver component is built on Jetty 8.1.15, of which I created my own jar bundle (using jar xf and jar cf) containing:
    • jetty-continuation-8.1.15.v20140411.jar
    • jetty-http-8.1.15.v20140411.jar
    • jetty-io-8.1.15.v20140411.jar
    • jetty-security-8.1.15.v20140411.jar
    • jetty-server-8.1.15.v20140411.jar
    • jetty-servlet-8.1.15.v20140411.jar
    • jetty-util-8.1.15.v20140411.jar
    • jetty-webapp-8.1.15.v20140411.jar
    • jetty-xml-8.1.15.v20140411.jar
    • servlet-api-3.0.jar
  • Bouncycastle's bcprov-jdk15on-146.jar for keystore handling
  • Apache Commons Codec for fingerprinting keys
  • ZXing's core.jar for QR code generation or David Lazaro's wonderful QRCodeReaderView for easy QR code scanning (already includes core.jar)

Key generation

The hardest part was generating the keypair and certificate. I've got2 some3 inspiration4 from the web, but as I did not find an example ready to work on Android, here's mine:

/**
 * Creates a new SSL key and certificate and stores them in the app's
 * internal data directory.
 * 
 * @param ctx
 *            An Android application context
 * @param keystorePassword
 *            The password to be used for the keystore
 * @return boolean indicating success or failure
 */
public static boolean genSSLKey(Context ctx, String keystorePassword) {
    try {
        // Create a new pair of RSA keys using BouncyCastle classes
        RSAKeyPairGenerator gen = new RSAKeyPairGenerator();
        gen.init(new RSAKeyGenerationParameters(BigInteger.valueOf(3),
                new SecureRandom(), 1024, 80));
        AsymmetricCipherKeyPair keypair = gen.generateKeyPair();
        RSAKeyParameters publicKey = (RSAKeyParameters) keypair.getPublic();
        RSAPrivateCrtKeyParameters privateKey = (RSAPrivateCrtKeyParameters) keypair
                .getPrivate();

        // We also need our pair of keys in another format, so we'll convert
        // them using java.security classes
        PublicKey pubKey = KeyFactory.getInstance("RSA").generatePublic(
                new RSAPublicKeySpec(publicKey.getModulus(), publicKey
                        .getExponent()));
        PrivateKey privKey = KeyFactory.getInstance("RSA").generatePrivate(
                new RSAPrivateCrtKeySpec(publicKey.getModulus(), publicKey
                        .getExponent(), privateKey.getExponent(),
                        privateKey.getP(), privateKey.getQ(), privateKey
                                .getDP(), privateKey.getDQ(), privateKey
                                .getQInv()));

        // CName or other certificate details do not really matter here
        X509Name x509Name = new X509Name("CN=" + CNAME);

        // We have to sign our public key now. As we do not need or have
        // some kind of CA infrastructure, we are using our new keys
        // to sign themselves

        // Set certificate meta information
        V3TBSCertificateGenerator certGen = new V3TBSCertificateGenerator();
        certGen.setSerialNumber(new DERInteger(BigInteger.valueOf(System
                .currentTimeMillis())));
        certGen.setIssuer(new X509Name("CN=" + CNAME));
        certGen.setSubject(x509Name);
        DERObjectIdentifier sigOID = PKCSObjectIdentifiers.sha1WithRSAEncryption;
        AlgorithmIdentifier sigAlgId = new AlgorithmIdentifier(sigOID,
                new DERNull());
        certGen.setSignature(sigAlgId);
        ByteArrayInputStream bai = new ByteArrayInputStream(
                pubKey.getEncoded());
        ASN1InputStream ais = new ASN1InputStream(bai);
        certGen.setSubjectPublicKeyInfo(new SubjectPublicKeyInfo(
                (ASN1Sequence) ais.readObject()));
        bai.close();
        ais.close();

        // We want our keys to live long
        Calendar expiry = Calendar.getInstance();
        expiry.add(Calendar.DAY_OF_YEAR, 365 * 30);

        certGen.setStartDate(new Time(new Date(System.currentTimeMillis())));
        certGen.setEndDate(new Time(expiry.getTime()));
        TBSCertificateStructure tbsCert = certGen.generateTBSCertificate();

        // The signing: We first build a hash of our certificate, then sign
        // it with our private key
        SHA1Digest digester = new SHA1Digest();
        AsymmetricBlockCipher rsa = new PKCS1Encoding(new RSAEngine());
        ByteArrayOutputStream bOut = new ByteArrayOutputStream();
        DEROutputStream dOut = new DEROutputStream(bOut);
        dOut.writeObject(tbsCert);
        byte[] signature;
        byte[] certBlock = bOut.toByteArray();
        // first create digest
        digester.update(certBlock, 0, certBlock.length);
        byte[] hash = new byte[digester.getDigestSize()];
        digester.doFinal(hash, 0);
        // and sign that
        rsa.init(true, privateKey);
        DigestInfo dInfo = new DigestInfo(new AlgorithmIdentifier(
                X509ObjectIdentifiers.id_SHA1, null), hash);
        byte[] digest = dInfo.getEncoded(ASN1Encodable.DER);
        signature = rsa.processBlock(digest, 0, digest.length);
        dOut.close();
        
        // We build a certificate chain containing only one certificate
        ASN1EncodableVector v = new ASN1EncodableVector();
        v.add(tbsCert);
        v.add(sigAlgId);
        v.add(new DERBitString(signature));
        X509CertificateObject clientCert = new X509CertificateObject(
                new X509CertificateStructure(new DERSequence(v)));
        X509Certificate[] chain = new X509Certificate[1];
        chain[0] = clientCert;

        // We add our certificate to a new keystore
        KeyStore keyStore = KeyStore.getInstance("BKS");
        keyStore.load(null);
        keyStore.setKeyEntry(KEY_ALIAS, (Key) privKey,
                keystorePassword.toCharArray(), chain);
        
        // We write this keystore to a file
        OutputStream out = ctx.openFileOutput(FILE_NAME,
                Context.MODE_PRIVATE);
        keyStore.store(out, keystorePassword.toCharArray());
        out.close();
        return true;
    } catch (Exception e) {
        // Do your exception handling here
        // There is a lot which might go wrong
        e.printStackTrace();
    }
    return false;
}

The key generation takes roughly fifteen seconds on my Motorola Moto G, so doing it on the UI thread is strongly discouraged: do it in your Service (you should have one for your server!) or in an AsyncTask; a minimal sketch follows below.

FILE_NAME is the name of your key store (I use keystore.bks) and KEY_ALIAS the alias of the new key inside the keystore (I use ssl).
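
Here is a minimal sketch of how the key generation could be kept off the UI thread with an AsyncTask. The GenerateKeyTask name is made up for this example; SSLUtils is assumed to be the class that holds genSSLKey(), and KEYSTORE_PASSWORD is the same hardcoded password used in the Jetty initialization below:

// Minimal sketch, not part of the original code.
// Needs android.os.AsyncTask and android.content.Context.
private class GenerateKeyTask extends AsyncTask<String, Void, Boolean> {
    private final Context appContext;

    GenerateKeyTask(Context ctx) {
        // keep the application context to avoid leaking an Activity
        this.appContext = ctx.getApplicationContext();
    }

    @Override
    protected Boolean doInBackground(String... params) {
        // params[0] is the keystore password; this runs off the UI thread
        return SSLUtils.genSSLKey(appContext, params[0]);
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // back on the UI thread; e.g. start the Jetty server
        // (or show an error) once the key exists
    }
}

// Usage, for example from your Service:
// new GenerateKeyTask(this).execute(KEYSTORE_PASSWORD);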

Jetty initialization

In the initialization code of our Jetty server, we have to load the newly created keystore into an SslContextFactory, which is quite easy:

SslContextFactory sslContextFactory = new SslContextFactory();
InputStream in = openFileInput(SSLUtils.FILE_NAME);
KeyStore keyStore = KeyStore.getInstance("BKS");
try {
    keyStore.load(in, KEYSTORE_PASSWORD.toCharArray());
} finally {
    in.close();
}
sslContextFactory.setKeyStore(keyStore);
sslContextFactory.setKeyStorePassword(KEYSTORE_PASSWORD);
sslContextFactory.setKeyManagerPassword(KEYSTORE_PASSWORD);
sslContextFactory.setCertAlias(SSLUtils.KEY_ALIAS);
sslContextFactory.setKeyStoreType("bks");
// We do not want to speak old SSL and we only want to use strong ciphers
// In the original version of this blog post, I only set
// sslContextFactory.setIncludeProtocols("TLS");
// which does not seem to work with current Android/jetty/whatever versions
sslContextFactory.setIncludeProtocols("TLSv1", "TLSv1.1", "TLSv1.2");
sslContextFactory.setExcludeProtocols("SSLv3");
sslContextFactory.setIncludeCipherSuites("TLS_DHE_RSA_WITH_AES_128_CBC_SHA");

Server server = new Server();
SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(
        sslContextFactory);
sslConnector.setPort(PORT);
server.addConnector(sslConnector);

// As before:
server.setHandler(handler); // where handler is an ``AbstractHandler`` instance
server.start();

QR Code generation

In order to display the QR code, we first need to create a SHA1 hash of our certificate:

public static String getSHA1Hash(Context ctx, String keystorePassword) {
    InputStream in = null;
    KeyStore keyStore;
    try {
        in = ctx.openFileInput(FILE_NAME);
        keyStore = KeyStore.getInstance("BKS");
        keyStore.load(in, keystorePassword.toCharArray());
        return new String(Hex.encodeHex(DigestUtils.sha1(keyStore
                .getCertificate(KEY_ALIAS).getEncoded())));
    } catch (Exception e) {
        // Should not go wrong on standard Android devices
        // except possible IO errors on reading the keystore file
        e.printStackTrace();
    } finally {
        try {
            if (in != null)
                in.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return null;
}

Using this method, we can generate a QR code containing the IP address, port, and certificate fingerprint and draw it onto an ImageView:

protected void genQrCode(ImageView view) {
    QRCodeWriter writer = new QRCodeWriter();
    try {
        WifiManager wifiManager = (WifiManager) getActivity()
                .getSystemService(Context.WIFI_SERVICE);
        int ipAddress = wifiManager.getConnectionInfo().getIpAddress();
        final String formatedIpAddress = String.format(Locale.GERMAN,
                "%d.%d.%d.%d", (ipAddress & 0xff),
                (ipAddress >> 8 & 0xff), (ipAddress >> 16 & 0xff),
                (ipAddress >> 24 & 0xff));

        JSONObject qrdata = new JSONObject();
        qrdata.put("server", formatedIpAddress);
        qrdata.put("port", ServerService.PORT);
        qrdata.put("cert", getSHA1Hash(getActivity(), KEYSTORE_PASSWORD));
        qrdata.put("secret", SECRET); // for authentitication. Generate yourself ;)

        BitMatrix bitMatrix = writer.encode(qrdata.toString(),
                BarcodeFormat.QR_CODE, 500, 500);
        int width = bitMatrix.getWidth();
        int height = bitMatrix.getHeight();
        Bitmap bmp = Bitmap.createBitmap(width, height,
                Bitmap.Config.RGB_565);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                bmp.setPixel(x, y, bitMatrix.get(x, y) ? Color.BLACK
                        : Color.WHITE);
            }
        }
        view.setImageBitmap(bmp);

    } catch (WriterException e) {
        e.printStackTrace();
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

Client side

On the client side, an HttpsURLConnection is used for the connection. It works roughly like this:

// My application saves server address, port and certificate
// in a SharedPreferences store
SharedPreferences sp = getSharedPreferences("server",
        Context.MODE_PRIVATE);

X509PinningTrustManager trustManager = new X509PinningTrustManager(
        sp.getString("cert", ""));

SSLContext sc = null;
DataSource data = getDataSource();
HttpsURLConnection urlConnection = null;
try {
    data.open();

    URL url = new URL("https://" + sp.getString("server", "") + ":"
            + sp.getInt("port", ServerService.PORT) + "/path");
    urlConnection = (HttpsURLConnection) url.openConnection();

    // Apply our SSL settings
    urlConnection
            .setHostnameVerifier(trustManager.new HostnameVerifier());
    try {
        sc = SSLContext.getInstance("TLS");
        sc.init(null, new TrustManager[] { trustManager },
                new java.security.SecureRandom());
        urlConnection.setSSLSocketFactory(sc.getSocketFactory());
    } catch (NoSuchAlgorithmException e1) {
        // Should not happen...
        e1.printStackTrace();
    } catch (KeyManagementException e1) {
        // Should not happen...
        e1.printStackTrace();
    }

    // do HTTP POST or authentication stuff here...
    InputStream in = new BufferedInputStream(
            urlConnection.getInputStream());
    // process the response...
} catch (javax.net.ssl.SSLHandshakeException e) {
    // We got the wrong certificate
    // (or handshake was interrupted, we can't tell here)
    e.printStackTrace();
} catch (IOException e) {
    // other IO errors
    e.printStackTrace();
} finally {
    if (urlConnection != null)
        urlConnection.disconnect();
    try {
        data.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

If your application /only/ ever connects to this one server, you can use the HttpsURLConnection.setDefault* methods instead of configuring every request individually. By writing a clever HostnameVerifier and TrustManager you could also make sure that pinning is only enforced for your own server while the system defaults are used for all others.
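
A minimal sketch of that global variant, assuming the same trustManager instance as above (and keeping in mind that these defaults affect every HttpsURLConnection in the process):

// Sketch: pin globally instead of per request. Only sensible if the app
// never talks to any other HTTPS server, because these defaults apply
// to every HttpsURLConnection in the process.
try {
    SSLContext sc = SSLContext.getInstance("TLS");
    sc.init(null, new TrustManager[] { trustManager },
            new java.security.SecureRandom());
    HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    HttpsURLConnection.setDefaultHostnameVerifier(
            trustManager.new HostnameVerifier());
} catch (NoSuchAlgorithmException e) {
    e.printStackTrace();
} catch (KeyManagementException e) {
    e.printStackTrace();
}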

The client connection code above makes use of a TrustManager class I implemented myself. It accepts exactly one certificate:

/**
 * This class provides an X509 Trust Manager trusting only one certificate.
 */
public class X509PinningTrustManager implements X509TrustManager {

    String pinned = null;

    /**
     * Creates the Trust Manager.
     *
     * @param pinnedFingerprint
     *            The certificate to be pinned. Expecting a SHA1 fingerprint in
     *            lowercase without colons.
     */
    public X509PinningTrustManager(String pinnedFingerprint) {
        pinned = pinnedFingerprint;
    }

    public java.security.cert.X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[] {};
    }

    public void checkClientTrusted(X509Certificate[] certs, String authType)
            throws CertificateException {
        checkServerTrusted(certs, authType);
    }

    public void checkServerTrusted(X509Certificate[] certs, String authType)
            throws CertificateException {
        for (X509Certificate cert : certs) {
            try {
                String fingerprint = new String(Hex.encodeHex(DigestUtils
                        .sha1(cert.getEncoded())));
                if (pinned.equals(fingerprint))
                    return;
            } catch (CertificateEncodingException e) {
                e.printStackTrace();
            }
        }
        throw new CertificateException("Certificate did not match, pinned to "
                + pinned);
    }

    /**
     * This hostname verifier does not verify hostnames. That is not necessary
     * here, as we only accept one single, pinned certificate anyway.
     */
    public class HostnameVerifier implements javax.net.ssl.HostnameVerifier {
        public boolean verify(String hostname, SSLSession session) {
            return true;
        }
    }
}
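
For completeness, here is a minimal sketch, not part of the original code, of how the JSON payload scanned from the QR code could be written into the SharedPreferences that the client code reads. The storeQrPayload() name is made up; qrText is whatever decoded string your scanner (for example QRCodeReaderView) hands you:

// Hypothetical helper: persist the scanned QR payload so the client
// connection code above can read it from the "server" preferences.
private void storeQrPayload(Context ctx, String qrText) {
    try {
        JSONObject qrdata = new JSONObject(qrText);
        SharedPreferences sp = ctx.getSharedPreferences("server",
                Context.MODE_PRIVATE);
        sp.edit()
                .putString("server", qrdata.getString("server"))
                .putInt("port", qrdata.getInt("port"))
                .putString("cert", qrdata.getString("cert"))
                .putString("secret", qrdata.getString("secret"))
                .commit();
    } catch (JSONException e) {
        // malformed or foreign QR code, ignore it
        e.printStackTrace();
    }
}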

Security considerations

This approach should be very secure against most attackers: all network traffic is encrypted, and a traditional man-in-the-middle attack is not possible because the client knows exactly which certificate to expect and does not accept any other. Thanks to the DHE cipher suite we even get Perfect Forward Secrecy! The fact that this information is not transferred over the network but via a QR code adds additional security.

However, there are some security problems left:

  • Most important: we only have TLS 1.0 available; I did not find a way to enable TLS 1.2. This is sad. I suspect the reason is that Android is still based on Java 6, and TLS 1.2 was only introduced with Java 7. It is possible to run Java 7 code starting with Android KitKat, but this did not help in my quick test. (A quick way to check which protocol versions a device actually offers is sketched below this list.)
  • The keystore password is hardcoded. The only other option would be to prompt the user for the password on every application startup, which is not desirable. This, however, only matters if someone gains access to the keystore file, which is readable only by the app itself and root. And if our potential attacker is root on your phone, I guess you've got bigger problems than this keystore… Remember, my application is supposed to run on phones dedicated to running this application (with few third-party applications installed that could introduce vulnerabilities).
  • SecureRandom is not cryptographically secure in Android 4.3 or older, as Android Lint points out. This official blog post has some more details and a workaround, but if you care about security, you should not run an old operating system anyway ;)
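
Not from the original post, but a quick way to check which TLS protocol versions a device's default provider actually offers is to ask an SSLEngine (needs javax.net.ssl.SSLEngine, java.util.Arrays and android.util.Log):

// Diagnostic sketch: log the supported and default-enabled TLS versions.
try {
    SSLContext sc = SSLContext.getInstance("TLS");
    sc.init(null, null, null);
    SSLEngine engine = sc.createSSLEngine();
    Log.d("TLS", "supported: "
            + Arrays.toString(engine.getSupportedProtocols()));
    Log.d("TLS", "enabled by default: "
            + Arrays.toString(engine.getEnabledProtocols()));
} catch (NoSuchAlgorithmException e) {
    e.printStackTrace();
} catch (KeyManagementException e) {
    e.printStackTrace();
}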

26. August 2014 22:00:00

24. August 2014

Moredreads Blog

Atsutane's blog

FrOSCon 2014 Summary

This year’s FrOSCon is over. We had an unused booth (later used by someone for presenting Emacs) and our own room, including some talks and a workshop. The talks were surprisingly well attended; we did not expect so many people to come. The original expectation was that we might be 15 people, but we got a larger room, which was probably a good decision by the organization team.

There are some things we can and should do better at future events; this year’s organization was, as always, really chaotic.

There are a few things we did well and others where we learned that we handled them… well, badly is the proper term.

  • We need to give talks on each day we get a room. This year we only gave talks on the first day, and on the second day the room was quiet. (A positive side effect was the reorganization of the SSL certificates.)

  • We have to prepare the talks better and not come up with the topics the weekend before the event and create the slides at the last minute.

  • We used an Etherpad to organize a todo list; from my point of view this worked well.

  • When we want a room, we should skip the booth. We did not use the booth this year, as we had no hardware to present anything down there.

  • Merchandising: it’s the same every year, users want to get some merchandise. If I saw it correctly, Debian sells T-shirts; under German law we probably can’t do something like that to invest a bit of profit into our servers. Giving away free stickers to actual users (not arrogant Fedora folks) might be an option, but we would have to pay for them ourselves. Another option would be to talk to the people from the merchandising booth at the entrance.

  • When we give workshops we need better organization, especially for the installation part. I haven’t done an installation with the pacstrap script yet, which leaves me in the same situation as someone new: how the heck do I start an installation?

24. August 2014 17:34:38