GIMP onion layers plug-in

21.02.2017 20:48

Some time ago I was playing with animation making applications on a (non-pro) iPad. I found the whole ecosystem very closed and I had to jump through some hoops to get my drawings back onto a Linux computer. However the fact that you can draw directly on the screen does make some things easier compared to a standalone Wacom tablet, even if the accuracy is significantly worse.

One other thing in particular stood out compared to my old GIMP setup. These applications make it very easy to jump frame by frame through the animation. In one touch you can display the next frame and do some quick edits and then move back with another touch. You can browse up and down the stack as a quick way to preview the animation. They also do something they call onion layering which simply means that they overlay the next and previous frames with reduced opacity so that it's easier to see how things are moving around.

This is all obviously useful. I was doing similar things in GIMP, except that changing frames there took some more effort. GIMP as such doesn't have a concept of frames. Instead you use image layers (or layer groups) as frames. You have to click to select a layer and then a few more clicks to adjust the visibility and opacity for neighboring layers if you want to have the onion layer effect. This quickly amounts to a lot of clicking around if you work on more than a handful of frames.
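
The per-frame bookkeeping is simple enough to sketch in plain Python. The Layer class and show_frame() function below are hypothetical stand-ins, not the actual GIMP API (gimpfu is only available inside GIMP) - they only illustrate the visibility and opacity juggling described above:

```python
# Plain-Python stand-ins for GIMP layers; the real plug-in does the
# same bookkeeping through the GIMP plug-in API.
class Layer:
    def __init__(self, name):
        self.name = name
        self.visible = False
        self.opacity = 100.0

def show_frame(layers, current, onion=True, onion_opacity=30.0):
    """Show frame `current` at full opacity and ghost its neighbors."""
    # Reset everything first, so stepping through frames is idempotent.
    for layer in layers:
        layer.visible = False
        layer.opacity = 100.0

    layers[current].visible = True

    if onion:
        for neighbor in (current - 1, current + 1):
            if 0 <= neighbor < len(layers):
                layers[neighbor].visible = True
                layers[neighbor].opacity = onion_opacity

frames = [Layer("frame%d" % n) for n in range(4)]
show_frame(frames, 2)

for layer in frames:
    print(layer.name, layer.visible, layer.opacity)
```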

GIMP does offer a Python plug-in interface however, so automating quick frame jumps is relatively simple. Relatively, because GIMP's Python documentation turns out to be somewhat rudimentary if you're not already familiar with GIMP internals. I found it best to learn from the Python-Fu samples and explore the interface using the built-in interactive console.

Screenshot of the GIMP onion layers plug-in

The end result of this exercise was the GIMP onion layers plug-in, which you can now find on GitHub together with installation and usage instructions. The plug-in doesn't have much in terms of a user interface - it merely registers a handful of python-fu-onion- actions for stepping to the previous or next frame, with or without the onion layer effect. The idea is that you then assign keyboard (or tablet button) shortcuts to these actions. You will have to define the shortcuts yourself though, since the plug-in can't define them for you. I like to use the dot and comma keys since they don't conflict with other GIMP shortcuts and match the typical frame step buttons on video players.

If you follow the layer structure suggested by the Export layers plug-in, this all works quite nicely, including handling of background layers. The only real problem I encountered was the fact that the layer visibility and opacity operations clutter the undo history. Unfortunately, that seems to be the limitation of the plug-in API. Other plug-ins work around this by doing operations on a duplicate of the image, but obviously I can't do that here.

I should note that I was using GIMP 2.8.14 from Debian Jessie, so the code might be somewhat outdated compared to latest GIMP 2.8.20. Feedback in that regard is welcome, as always.

Posted by Tomaž | Categories: Code | Comments »

Python applications in a .zip

08.02.2017 10:20

Poking around youtube-dl I found this interesting recipe on how to package a self-contained Python application. Youtube-dl ships as a single executable file you can run immediately, or put somewhere into your PATH. This makes it very convenient to use even when you don't want to do the usual pip install dance. Of course, it comes at the cost of not resolving any dependencies for you.

I was expecting the file to be a huge, monolithic Python source file, but in fact it's a ZIP with a prepended hash-bang and nicely structured Python package inside. Simplified a bit, here is the gist of the Makefile part that builds it:

	cd src && zip ../hello.zip __main__.py hello/*.py
	echo '#!/usr/bin/python' > hello
	cat hello.zip >> hello
	rm hello.zip
	chmod a+x hello

Now, if src/__main__.py contains:

import hello

hello.greet()

And src/hello/__init__.py contains:

def greet():
	print("Hello, World!")

Building the executable and running hello from the command-line should result in the standard greeting:

$ make
cd src && zip ../hello.zip __main__.py hello/*.py
  adding: __main__.py (stored 0%)
  adding: hello/ (stored 0%)
  adding: hello/__init__.py (stored 0%)
echo '#!/usr/bin/python' > hello
cat hello.zip >> hello
rm hello.zip
chmod a+x hello
$ ./hello
Hello, World!

How does this work? Apparently it's quite an old trick with some added refinement. Since version 2.3, Python has known how to import modules directly from ZIP files in the same way as from ordinary directories. Python can also execute a directory or a ZIP file given on the command line, in which case it runs the __main__.py module found at its top level.

It sounds very much like Java JARs, doesn't it? The only missing part is the #!... line that makes the Linux kernel use the Python interpreter when executing the file. Since the ZIP format ignores any junk that precedes the compressed data, the line can simply be prepended, as if the whole file were a simple Bash script.
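
The same construction is easy to reproduce from Python itself, which makes the moving parts explicit. This is only an illustrative sketch using the standard zipfile module with the hypothetical "hello" example from above, not how youtube-dl actually builds its releases:

```python
import os
import subprocess
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
exe = os.path.join(tmpdir, "hello")
zip_path = exe + ".zip"

with zipfile.ZipFile(zip_path, "w") as z:
    # __main__.py is what the interpreter executes when it is handed
    # a ZIP file on the command line.
    z.writestr("__main__.py", "import hello\nhello.greet()\n")
    z.writestr("hello/__init__.py",
               'def greet():\n    print("Hello, World!")\n')

# Prepend the hash-bang. The ZIP central directory is located from the
# end of the file, so the leading junk is simply ignored.
with open(zip_path, "rb") as zsrc, open(exe, "wb") as f:
    f.write(b"#!/usr/bin/python\n")
    f.write(zsrc.read())
os.chmod(exe, 0o755)

# Run it through the interpreter explicitly; this also works on
# systems where the hash-bang line alone would not.
out = subprocess.check_output([sys.executable, exe])
print(out.decode())
```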

Posted by Tomaž | Categories: Code | Comments »

Moving Dovecot indexes and control files

23.12.2016 22:06

Dovecot IMAP server can use a standard Maildir for storage of messages inside users' home directories. The default in that case is to store search indexes and control files in the same directory structure, alongside the mail files. That can be convenient, since no special setup is needed and everything is stored in the same place.

However, this doesn't work very well if you have disk quotas enabled on the filesystem that stores Maildirs. In case a user reaches their quota, Dovecot will not be able to write to its own files, which can lead to problems. Hence, documentation recommends that you configure a separate location for Dovecot's files in that case. This is done with INDEX and CONTROL options to the mail_location specification.

For example, after setting up appropriate directory structure and permissions under /var/lib/dovecot:

mail_location = maildir:%h/Maildir:INDEX=/var/lib/dovecot/index/%u:CONTROL=/var/lib/dovecot/control/%u

You can just set this up and leave old index and control files in place. In that case, Dovecot will automatically regenerate them. However, this is not ideal. It can take significant time to regenerate indexes if you have a lot of mail. You also lose some IMAP-related metadata, like message flags and unique IDs, which will confuse IMAP clients. It would be better to move existing files to the new location, however the documentation doesn't say how to do that.

I found that the following script works with Dovecot 2.2.13 on Debian Jessie. As always, be careful when dealing with other people's mail and double check that the script does what you want. I had my share of problems when coming up with this. Make backups.


#!/bin/bash

set -ue

# Run as "<this script> user path-to-users-maildir" for each user.
#
# Make sure that Dovecot isn't running or that this specific IMAP user isn't
# connected (and can't connect) while this script runs!

USERNAME="$1"
MAILDIR="$2"

DOVECOTDIR=/var/lib/dovecot

# Remove the "echo" after double-checking that this script does what you want.
MV="echo mv -i"

cd "$MAILDIR"

# Index files like dovecot.index, dovecot.index.cache, etc. go under the 
# INDEX directory. The directory structure should be preserved. For example,
# ~/Maildir/.Foo/dovecot.index should go to index/.Foo/dovecot.index.
#
# Exception are index files in the root of Maildir. Those should go under 
# the .INBOX subdirectory.

b="$DOVECOTDIR/index/$USERNAME/.INBOX"
mkdir -p "$b"
$MV *index* "$b"

find . -name "*index*"|while read a; do
	b="$DOVECOTDIR/index/$USERNAME/`dirname "$a"`"
	mkdir -p "$b"
	$MV "$a" "$b"
done

# dovecot-uidlist and dovecot-keywords files should go under CONTROL, in a
# similar way to indexes. There is the same exception for .INBOX.

b="$DOVECOTDIR/control/$USERNAME/.INBOX"
mkdir -p "$b"
$MV dovecot-uidlist dovecot-keywords "$b"

find . -name "*dovecot*"|while read a; do
	b="$DOVECOTDIR/control/$USERNAME/`dirname "$a"`"
	mkdir -p "$b"
	$MV "$a" "$b"
done

# subscriptions file should go to the root of the control directory.
#
# Note that commands above also move some dovecot-* files into the root of
# the control directory. This seems to be fine.

$MV "subscriptions" "$DOVECOTDIR/control/$USERNAME"
Posted by Tomaž | Categories: Code | Comments »

Blacklisting users for inbound mail in Exim

18.09.2016 12:00

You can prevent existing local users from receiving mail by redirecting them to :fail: in /etc/aliases. For example, to make SMTP delivery to list@... fail with 550 Unrouteable address:

list: :fail:Unrouteable address

See special items section in the Redirect router documentation.

By default, Exim in Debian will attempt to deliver mail for all user accounts, even non-human system users. System users (like list above) typically don't have a traditional home directory set in /etc/passwd. This means that mail for them will get stuck in queue as Exim tries and fails to write to their mailbox. Because spam also gets sent to such addresses, mail queue will grow and various things will start to complain. Traditionally, mail for system accounts is redirected to root in /etc/aliases, but some accounts just receive a ton of spam and it's better to simply reject mail sent to them.
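
For instance, an /etc/aliases that combines both approaches might look like this (the account names here are only examples):

# Forward mail for a system account to root.
www-data: root

# Reject mail outright for accounts that only collect spam.
list: :fail:Unrouteable address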

Another thing worth pointing out is the Handling incoming mail for local accounts with low UID section in README.Debian.gz in case you want to reject mail sent to all system accounts.

This took way too much time to figure out. There's a ton of guides on how to blacklist individual users for outgoing mail, but none I could find for cases like this. I was half-way into writing a custom router before I stumbled upon this feature.

Posted by Tomaž | Categories: Code | Comments »

Script for setting up multiple terminals

16.09.2016 19:04

Sometimes I'm working on software that requires running a lot of different inter-dependent processes (are microservices still a thing?). Using systemd or some other init system for starting up such systems is fine for production. While debugging something on my laptop however it's useful to have each process running in its own X terminal. This allows me to inspect any debug output and to occasionally restart something. I used to have scripts that would run commands in individual GNU screen sessions, but that had a number of annoying problems.

I recently came up with the following:


#!/bin/bash

set -ue

if [ "$#" -ne 1 ]; then
	echo "USAGE: $0 path_to_file"
	echo "File contains one command per line to be started in a terminal window."
	exit 1
fi

cat "$1" | while read CMD; do
	# Skip empty lines and comments.
	if [ -z "$CMD" -o "${CMD:0:1}" = "#" ]; then
		continue
	fi

	RCFILE=`mktemp`

	# Temporary rc file: keep the user's environment, put the command
	# into history, print it as a title line, then run it.
	cat > "$RCFILE" <<END
source ~/.bashrc
history -s $CMD
echo $CMD
$CMD
END

	gnome-terminal -e "/bin/bash --rcfile \"$RCFILE\""
	rm "$RCFILE"
done

This script reads a file that contains one command per line. Empty lines and lines starting with a hash sign are ignored. For each line it opens a new gnome-terminal (adjust as needed - most terminal emulators support the -e argument) and runs the command in a way that:

  • The terminal doesn't immediately close after the command exits. Instead it drops back to bash. This allows you to inspect any output that got printed right before the process died.
  • The command is printed on top of the terminal before the command runs. This allows you to identify the terminal running a particular process in case that is not obvious from the command output. For some reason, gnome-terminal's --title doesn't work.
  • The command is appended to the top of bash's history list. This allows you to easily restart the process that died (or that you killed with Ctrl-C) by simply pressing the up cursor and enter keys.
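
As an example, a commands file for the script might look like this (the commands themselves are placeholders - use whatever processes your project needs):

# backend services
redis-server
python -m http.server 8000

# worker processes
./worker.py --verbose
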
Posted by Tomaž | Categories: Code | Comments »

The NumPy min and max trap

30.08.2016 20:21

There's an interesting trap that I've managed to fall into a few times when doing calculations with Python. NumPy provides several functions with the same names as functions built into Python. These replacements typically provide better integration with array types. Among them are min() and max(). In the vast majority of cases, the NumPy versions are a drop-in replacement for the built-ins. In a few, however, they can cause some very hard-to-spot bugs. Consider the following:

import numpy as np

print(max(-1, 0))
print(np.max(-1, 0))

This prints (at least in NumPy 1.11.1 and earlier):

0
-1
Where is the catch? The built-in max() can be used in two distinct ways: you can either pass it an iterable as the single argument (in which case the largest element of the iterable is returned), or you can pass multiple arguments (in which case the largest argument is returned). In NumPy, max() is an alias for amax(), which only supports the former convention. The second argument in the example above is interpreted as the array axis along which to compute the maximum. It appears that NumPy thinks axis zero is a reasonable choice for a zero-dimensional input and doesn't complain.

Yes, recent versions of NumPy will complain if you have anything other than 0 or -1 in the axis argument. Having max(x, 0) in code is not that unusual though. I use it a lot as a shorthand when I need to clip negative values to 0. When moving code around between scripts that use NumPy, scripts that don't, and IPython Notebooks (which do "from numpy import *" by default), it's easy to mess things up.

I guess both sides are to blame here. I find that flexible functions that interpret arguments in multiple ways are usually bad practice and I try to leave them out of interfaces I design. Yes, they are convenient, but they also often lead to bugs. On the other hand, I would also expect NumPy to complain about the nonsensical axis argument. Axis -1 makes sense for a zero-dimensional input, axis 0 doesn't. The alias from max() to amax() is dangerous (and as far as I can see undocumented). A possible way to prevent such mistakes would be to support only the named version of the axis argument.
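
If the goal is just to clip negative values to zero, there are unambiguous spellings that avoid the trap entirely:

```python
import numpy as np

# Element-wise maximum of two arguments - the NumPy equivalent of
# the built-in max(x, 0) idiom:
assert np.maximum(-1, 0) == 0

# Explicit clipping reads even better when that's the intent:
assert np.clip(-1, 0, None) == 0

# And when you really want the largest of several values, pass them
# as a single iterable - both versions agree on that convention:
assert max([-1, 0]) == 0
assert np.max([-1, 0]) == 0
```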

Posted by Tomaž | Categories: Code | Comments »

Linux loader for DOS-like .com files

25.08.2016 18:09

Back in the olden days of DOS, there was a thing called a .com executable. They were obsolete even then, being basically inherited from CP/M and replaced by DOS MZ executables. Compared to modern binary formats like ELF, .com files were exceedingly simple. There was no file header, no metadata, no division between code and data sections. The operating system would load the entire file into a 16-bit memory segment at offset 0x100, set the stack and the program segment prefix and jump to the first instruction. It was more of a convention than a file format.

While this simplicity created many limitations, it also gave rise to many fun little hacks. You could write a binary executable directly in a text editor. The absence of headers meant that you could make extremely small programs. A minimal executable in Linux these days is somewhere in the vicinity of 10 kB. With some clever hacking you might get it down to a hundred bytes or so. A small .com executable can be on the order of tens of bytes.

A comment on Hacker News recently caught my attention: one could write a .com loader for Linux pretty easily. How easy would it be? I had to try it out.

/* Map address space from 0x00000 to 0x10000. */

void *p = mmap(	(void*)0x00000, 0x10000,
		PROT_READ|PROT_WRITE|PROT_EXEC,
		MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED,
		-1, 0);

/* Load file at address 0x100. */

/* (a boring loop here reading from a file to memory.) */

/* Set stack pointer to end of allocated area, jump to 0x100. */

asm(
	"mov    $0x10000, %rsp\n"
	"jmp    0x100\n"
);

This is the gist of it for Linux on x86_64. First we have to map 64 kB of memory at the bottom of our virtual address space. This is where NULL pointers and other such beasts roam, so it's unallocated by default on Linux. In fact, as a security feature the kernel will actively prevent such calls to mmap(). We have to disable this protection first with:

$ sysctl vm.mmap_min_addr=0

Memory mappings have to be aligned with page boundaries, so we can't simply map the file at address 0x100. To work around this we use an anonymous map starting at address 0 and then fill it in manually with the contents of the .com file at the correct offset. Since we have the whole 64 bit linear address space to play with, choosing 0x100 is a bit silly (code section usually lives somewhere around 0x400000 on Linux), but then again, so is this entire exercise.

Once we have everything set up, we set the stack pointer to the top of our 64 kB and jump to the code using some in-line assembly. If we were being pedantic, we could set up the PSP as well. Most of it doesn't make much sense on Linux though (except for command-line parameters maybe). I didn't bother.

Now that we have the loader ready, how do we create a .com binary that will work with it? We have to turn to assembly:

    call   greet

    ; exit(0)
    mov    rax, 60
    mov    rdi, 0
    syscall

; A function call, just to show off our working stack.
greet:
    ; write(1, msg, 14)
    mov     rax, 1
    mov     rdi, 1
    mov     rsi, msg
    mov     rdx, 14
    syscall
    ret

msg db      "Hello, World!", 10

This will compile nicely into an x86_64 ELF object file with NASM. We then have to link it into an ELF executable using a custom linker script that tells the linker that all sections will be placed in a chunk of memory starting at 0x100:

MEMORY {
	RAM (rwx) : ORIGIN = 0x0100, LENGTH = 0xff00
}

Linker will create an executable which contains our code with all the proper offsets, but still has all the ELF cruft around it (it will segfault nicely if you try to run it with kernel's default ELF loader). As the final step, we must dump its contents into a bare binary file using objcopy:

$ objcopy -S --output-target=binary hello.elf hello.com

Finally, we can run our .com file with our loader:

$ ./loader hello.com
Hello, World!

As an extra convenience, we can register our new loader with the kernel, so that it will be invoked each time you try to execute a file with the .com extension (update-binfmts is part of the binfmt-support package on Debian):

$ update-binfmts --install com /path/to/loader --extension com
$ ./hello.com
Hello, World!

And there you have it, a nice 59 byte "Hello, World" binary for Linux. If you want to play with it yourself, see the complete example on GitHub that has a Makefile for your convenience. If you make something small and fun with it, please drop me a note.

One more thing. In case it's not clear at this point, this loader will not work with DOS executables (or CP/M in that case). Those expect to be run in 16-bit real mode and rely on DOS services. Needless to say, my loader makes no attempts to provide those. Code will run in the same environment as other Linux user space processes (albeit at a weird address) and must use the usual kernel syscalls. If you want to run old DOS stuff, use DOSBox or something similar.

Posted by Tomaž | Categories: Code | Comments »

Recent free software work

30.07.2016 9:25

I've done a bit of open source janitorial work recently. Here is a short recap.

jsonmerge is a Python module for merging a series of JSON documents using arbitrarily complicated rules inspired by JSON schema. I developed the gist of it with Sarah Bird during EuroPython 2014. Since then I've been maintaining it, but not really doing any further development. The 1.2.1 release fixes a bug in the internal handling of JSON references, reported by chseeling. The bug caused a RefResolutionError to be raised when merging properties with slash or tilde characters in them.

I believe jsonmerge is usable in its current form. The only larger problem that I know of is the fact that automatic schema generation for merged documents would need to be rethought and probably refactored. This would address incompatibility with jsonschema 2.5.0 and improve handling of some edge cases. get_schema() seems to be rarely used however. I don't have any plans to work on this issue at the moment as I'm not using jsonmerge myself. I would be happy to look into any pull requests or work on it myself if anyone would offer a suitable bounty.

aspell-sl is the Slovenian dictionary for GNU Aspell. Its Debian package was recently orphaned. As far as I know, this is currently the only Slovenian dictionary included in Debian. I took over as the maintainer of the Debian package and fixed several long-standing packaging bugs to prevent it from disappearing from next Debian stable release. I haven't updated the dictionary however. The word list, while usable, remains as it was since the last update somewhere in 2002.

The situation with this dictionary seems complicated. The original word list appears to have been prepared in 2001 or 2002 by a diverse group of people from JSI, LUGOS, University of Ljubljana and private companies. I'm guessing they were funded by the now-defunct Ministry of Information Society which was financing localization of open source projects around that time. The upstream web page is long gone. In fact, aspell itself doesn't seem to be doing that well, although I'm still a regular user. The only free and up-to-date Slovenian dictionary I've found on-line was in the Slovenian Dictionary Pack for LibreOffice. It seems the word list from there would require relatively little work to be adapted for GNU Aspell (Hunspell dictionaries use very similar syntax). However, the upstream source of data in the pack is unclear to me and I hesitate to mess too much with things I know very little about.

z80dasm is a disassembler for the Zilog Z80 microprocessor. I forked the dz80 project by Jan Panteltje when it became apparent that no freely available disassembler was capable of correctly disassembling Galaksija's ROM. The 1.1.4 release adds options for better control of labels generated at the start and end of sections in the binaries. It also fixes a memory corruption bug that could sometimes lead to a wrong disassembly.

Actually, I committed these two changes to the public git repository three years ago. Unfortunately, it seems that I forgot to package them into a new release at the time. Now I also took the opportunity to update and clean up the autotools setup. I'll work towards updating the z80dasm Debian package as well. z80dasm is pretty much feature complete at this point and, barring any further bug reports, I don't plan any further development.

Posted by Tomaž | Categories: Code | Comments »

On "ap_pass_brigade failed"

25.05.2016 20:37

Related to my recent rant regarding the broken Apache 2.4 in Debian Jessie, another curious thing was the appearance of the following in /var/log/apache2/error.log after the upgrade:

[fcgid:warn] [pid ...:tid ...] (32)Broken pipe: [client ...] mod_fcgid: ap_pass_brigade failed in handle_request_ipc function, referer: ...

Each such error is also related to a 500 Internal Server Error HTTP response logged in the access log.

There's a lot of misinformation floating around about this on the web. Contrary to popular opinion, this is not caused by wrong values of the various Fcgid... options or the PHP_FCGI_MAX_REQUESTS variable. Admittedly, I don't know much about PHP (which seems to be the primary use case for FCGI), but I do know how to read the mod_fcgid source code, and this error seems to have a very simple cause: clients that close the connection without waiting for the server to respond.

The error is generated on line 407 of fcgid_bridge.c (mod_fcgid 2.3.9):

/* Now pass any remaining response body data to output filters */
if ((rv = ap_pass_brigade(r->output_filters,
                          brigade_stdout)) != APR_SUCCESS) {
        ap_log_rerror(APLOG_MARK, APLOG_WARNING, rv, r,
                      "mod_fcgid: ap_pass_brigade failed in "
                      "handle_request_ipc function");
}


The comment at the top already suggests the cause of the error message: failure to send the response generated by the FCGI script. The condition is easy to reproduce with a short Python script that sends a request and immediately closes the socket:

import socket, ssl

HOST = "..."   # your server's host name
PATH = "/..."  # path to some document generated by an FCGI script

ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=HOST)
conn.connect((HOST, 443))
conn.sendall(("GET " + PATH + " HTTP/1.0\r\nHost: " + HOST + "\r\n\r\n").encode())
conn.close()

Actually, you can do the same with a browser by mashing refresh and stop buttons. The success somewhat depends on how long the script takes to generate the response - for very fast scripts it's hard to tear down the connection fast enough.

Probably at some point ap_pass_brigade() returned ECONNABORTED when the client broke the connection, hence the if statement in the code above. It appears that now EPIPE is returned and mod_fcgid was not properly updated. I was testing this on apache2 2.4.10-10+deb8u4.

In any case, this error message is benign. Fiddling with the FcgidOutputBufferSize might cause the response to be sent out earlier and reduce the chance that this will be triggered by buggy crawlers and such, but in the end there is nothing you can do about it on the server side. The 500 response in the log is also clearly an artifact in this case, since it's the client that caused the error, not the server, and no error page was actually delivered.

Posted by Tomaž | Categories: Code | Comments »

Jessie upgrade woes

23.05.2016 19:59

Debian 8 (Jessie) was officially released a bit over a year ago. The January before that I mentioned that I planned to upgrade my CubieTruck soon, which in this case turned out to mean 16 months. Doesn't time fly when you're not upgrading software? In any case, here are some assorted notes regarding the upgrade from Debian Wheezy. Most of them are not CubieTruck specific, so I guess someone else might find them useful. Or entertaining.

Jessie armhf comes with kernel 3.16, which supports CubieTruck's Allwinner SoC and most of the peripherals I care about. However, it seems you can't use the built-in NAND flash for booting. It would be nice to get away from the sunxi 3.4 kernel and enjoy kernel updates through apt, but I don't want to get back to messing with SD cards. Daniel Andersen keeps the 3.4 branch reasonably up-to-date and Jessie doesn't seem to have problems with it, so I'll stick with that for the time being.


Dreaded migration to systemd didn't cause any problems, apart from having to migrate a couple of custom init.d scripts. The most noticeable change is a significant increase in the number of mounted tmpfs filesystems, which makes df output somewhat unwieldy and, by consequence, Munin's disk usage graphs a mess.

SpeedyCGI was a way of making dynamic web pages back in the olden days. In the best Perl tradition it tweaked some low-level parts of the language in order to avoid restarting the interpreter for each HTTP request - like automagically persisting global state and making exit() not actually exit. From a standpoint of a lazy web developer it was an incredibly convenient way to increase performance of plain old CGI scripts. But alas, it remained unmaintained for many years and was finally removed in Jessie.

FCGI and Apache's mod_fcgid (not to be confused with mod_fastcgi, its non-free and slightly more broken cousin) seemed like natural replacements. While FCGI makes persistence explicit, the programming model is more or less the same, and hence the migration required only some minor changes to my scripts - and working around various cases of FCGI's brain damage. Like, for instance, its intentional ignorance of Perl's built-in Unicode support. Or the fact that gracefully stopping worker processes is more or less unsupported. In fact, FCGI's process management seems to be broken on multiple levels, as mod_fcgid has problems maintaining a stand-by pool of workers.

Perl despair

In any case, the new Apache 2.4 is a barrel of fun by itself. It changes the syntax for access control in such a way that config files need to be updated manually. It now also ignores all config files if they don't end in .conf. Incidentally, Apache will serve files from /var/www/html if it has no VirtualHosts defined. This seems to be a hard-coded default, so you can't find why it's doing that by grepping through /etc/apache2.

The default config in Jessie frequently warns about deadlocks in various places:

(35)Resource deadlock avoided: [client ...] mod_fcgid: can't lock process table in pid ...
(35)Resource deadlock avoided: AH00273: apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.
(35)Resource deadlock avoided: AH01948: Failed to acquire OCSP stapling lock

I'm currently using the following in apache2.conf, which so far seems to work around this problem:

# was: Mutex file:${APACHE_LOCK_DIR} default
Mutex sem default

Apache 2.4 in Jessie breaks HTTP ETag caching mechanism. If you're using mod_deflate (it's used by default to compress text-based content like HTML, CSS, RSS), browsers won't be getting 304 Not Modified responses, which means longer load times and higher bandwidth use. The workaround I'm using is the following in mods-available/deflate.conf (you need to also enable mod_headers):

Header edit "Etag" '^"(.*)-gzip"$' '"$1"'

This differs somewhat from the solution proposed in Apache's Bugzilla, but as far as I can see restores the old and tested behavior of Apache 2.2, even if it's not exactly up to HTTP specification.

I wonder whether this state of affairs means that everyone has moved on to nginx or these are just typical problems for a new major release. Anyway, to conclude on a more positive note, Apache now supports OCSP stapling, which is pretty simple to enable.
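
For reference, a minimal stapling setup looks something like the fragment below. The SSLStaplingCache directive must sit outside any VirtualHost; the cache path and size here are only examples:

# in the global scope, e.g. in mods-available/ssl.conf
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(32768)

# inside the VirtualHost that has SSLEngine on
SSLUseStapling on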

Finally, rsyslog is slightly broken in Jessie on headless machines that don't have an X server running. It spams the log with lines like:

rsyslogd-2007: action 'action 17' suspended, next retry is Sat May 21 18:12:53 2016 [try ]

This can be worked around by commenting-out the following lines in rsyslog.conf:

#       news.err;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       |/dev/xconsole
Posted by Tomaž | Categories: Code | Comments »

Display resolutions in QEMU in Windows 10 guest

21.01.2016 18:08

A while back I posted a recipe on how to add support for non-standard display resolutions to QEMU's stdvga virtual graphics card. For instance, I use that to add a wide-screen 1600x900 mode for my laptop. That recipe still works on Windows 7 guest and the latest QEMU release 2.5.0.

On Windows 10, however, the only resolution you can set with that setup is 1024x768. Getting it to work requires another approach. I should warn though that performance seems quite bad. Specifically, opening a web page that has any kind of dynamic elements on it can slow down the guest so much that just closing the offending window can take a couple of minutes.

First of all, the problem with Windows 10 being stuck at 1024x768 can be solved by switching the VGA BIOS implementation from the Bochs BIOS, which is shipped with QEMU upstream by default, to SeaBIOS. Debian packages switched to SeaBIOS in 1.7.0+dfsg-2, so if you are using QEMU packages from Jessie, you should already be able to set resolutions other than 1024x768 in Windows' Advanced display settings dialog.

If you're compiling QEMU directly from upstream, switching the VGA BIOS is as simple as replacing the vgabios-stdvga.bin file. Install the seabios package and run the following:

$ rm $PREFIX/share/qemu/vgabios-stdvga.bin
$ ln -s /usr/share/seabios/vgabios-stdvga.bin $PREFIX/share/qemu

where $PREFIX is the prefix you used when installing QEMU. If you're recompiling QEMU often, the following patch for QEMU's top-level Makefile does this automatically on make install:

--- qemu-2.5.0.orig/Makefile
+++ qemu-2.5.0/Makefile
@@ -457,6 +457,8 @@ ifneq ($(BLOBS),)
 	set -e; for x in $(BLOBS); do \
 		$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(qemu_datadir)"; \
 	done
+	rm -f "$(DESTDIR)$(qemu_datadir)/vgabios-stdvga.bin"
+	ln -s /usr/share/seabios/vgabios-stdvga.bin "$(DESTDIR)$(qemu_datadir)/vgabios-stdvga.bin"
 endif
 ifeq ($(CONFIG_GTK),y)
 	$(MAKE) -C po $@

Now you should be able to set resolutions other than 1024x768, but SeaBIOS still doesn't support non-standard resolutions like 1600x900 by default. For that, you need to amend the list of video modes and recompile SeaBIOS. Get the source (apt-get source seabios) and apply the following patch:

--- seabios-1.7.5.orig/vgasrc/bochsvga.c
+++ seabios-1.7.5/vgasrc/bochsvga.c
@@ -99,6 +99,9 @@ static struct bochsvga_mode
     { 0x190, { MM_DIRECT, 1920, 1080, 16, 8, 16, SEG_GRAPH } },
     { 0x191, { MM_DIRECT, 1920, 1080, 24, 8, 16, SEG_GRAPH } },
     { 0x192, { MM_DIRECT, 1920, 1080, 32, 8, 16, SEG_GRAPH } },
+    { 0x193, { MM_DIRECT, 1600, 900,  16, 8, 16, SEG_GRAPH } },
+    { 0x194, { MM_DIRECT, 1600, 900,  24, 8, 16, SEG_GRAPH } },
+    { 0x195, { MM_DIRECT, 1600, 900,  32, 8, 16, SEG_GRAPH } },
 };
 
 static int dispi_found VAR16 = 0;

You probably also want to increment the package version to keep the file from being overwritten the next time you run apt-get upgrade. Finally, recompile the package with dpkg-buildpackage and install it.

Now when you boot the guest you should see your new mode appear in the list of resolutions. There is no need to recompile or reinstall QEMU again.

Posted by Tomaž | Categories: Code | Comments »

Conveniently sharing screenshots

20.09.2015 20:44

When collaborating remotely or when troubleshooting something, it's often useful to quickly share screenshots in a chat like IRC or instant messaging. There are cloud services that do image sharing and provide convenient client applications for it. However, if you have your own web server, it seems wasteful to ship screenshots overseas just to be downloaded a moment later by a colleague a few blocks away. Here is a simple setup that instead uses SSH to copy a screenshot to a folder on a web server and integrates nicely with a GNOME desktop.

First, you need a script that pushes files to a public_html folder (or something equivalent) on your web server and reports back the URL:


#!/bin/bash

set -e

# Enter your server's details here... (these values are just placeholders)
SSH_DEST="user@example.com:public_html/images"
URL_BASE="http://example.com/~user/images"

for i in "$@"; do
	base=`basename "$i"`
	abs=`realpath "$i"`

	chmod o+r "$abs"
	scp -pC "$abs" "$SSH_DEST"

	URL="$URL_BASE/$base"
	zenity --title "Shared image" --info --text "<a href=\"$URL\">$URL</a>"
done

I use Zenity to display the URL in a nice dialog box (apt-get install zenity). Also, don't forget to set up a passwordless SSH login to your web server using your public key.

Name this script share and put it in some convenient folder. A good place is ~/.local/bin, which will also put it in your PATH on most systems (so you can also conveniently share arbitrary files from your terminal). Don't forget to mark the script executable.

You now have to associate this script with some image MIME types. Put a file named share.desktop into ~/.local/share/applications with the following contents. Don't forget to correct the full path to your share script.

[Desktop Entry]
Type=Application
Name=Share image
Comment=Share an image in a public HTTP folder
Exec=/home/user/.local/bin/share %U
MimeType=image/png;image/jpeg;

To check if this worked, open a file manager like Nautilus and right-click on some PNG file. It should now offer to open the file with Share image in addition to other applications usually used with images. If it doesn't work, you might need to run update-desktop-database ~/.local/share/applications.

Finally, install xfce4-screenshooter and associate it with the Print screen button on the keyboard. In GNOME, this can be done in the keyboard preferences:

Associating xfce4-screenshooter with Print screen key in GNOME.

Now, when you press Print screen and make a screenshot, you should have a Share image option under the Open with drop-down menu.

xfce4-screenshooter with "Share image" selected.

When you select it, a window with the URL should appear. You can copy the URL to the chat window, or click it to open it in your browser.

Zenity dialog with the URL of the shared screenshot.

Of course, if the folder you use on your web server is open to the public, so will be the screenshots you share. You might want to have the folder use a secret URL or set up basic HTTP authentication. In any case, it's good to apply some common sense when using your newly found screenshot sharing powers. It's also useful to have a cron job (or systemd timer) on the server that cleans up the folder with shared images once in a while. This both prevents the web server's disk from filling up and stale images from lingering for too long on the web. Implementing that is left as an exercise for the reader.
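For a start, something along these lines, run daily from cron, would do. The folder path and the one-week cutoff below are placeholders you would want to adjust:

```python
import os
import time


def cleanup(share_dir, max_age=7 * 24 * 3600):
    # Delete regular files in share_dir whose modification time is
    # older than max_age seconds (one week by default).
    cutoff = time.time() - max_age
    for name in os.listdir(share_dir):
        path = os.path.join(share_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.unlink(path)


# For example: cleanup(os.path.expanduser("~/public_html/images"))
```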


Using Metex M-3860D from Python

08.08.2015 15:56

Metex M-3860D is a fairly old multimeter with a computer interface. It uses an RS-232 serial line, but requires a somewhat unusual setup. It does work with modern USB-to-serial converters; however, a serial terminal application like cutecom won't work with it.

Here is a Python recipe using pyserial (replace /dev/ttyUSB0 with the path to your serial device):

import serial

# 1200 baud, 7 data bits, 2 stop bits, no parity, no flow control.
comm = serial.Serial("/dev/ttyUSB0", 1200, 7, "N", 2, timeout=1.)

# DTR line must be set to high, RTS line must be set to low.
comm.setDTR(1)
comm.setRTS(0)

# The instrument sends back currently displayed measurement each time
# you send 'D' over the line.
comm.write("D")

# This returns 14 characters that show type of measurement, value and
# unit - for example "DC  0.000   V\n".
print comm.readline()

The correct setting of the DTR and RTS modem lines is important. If you don't set them as shown here, you won't get any reply from the instrument. This is not explicitly mentioned in the manual. It's likely that the computer interface in the multimeter is in fact powered from these status lines.
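Since the reply has a fixed 14-character layout, splitting it into fields is simple. Here is a small sketch; the field boundaries are guessed from the example reply above rather than taken from the manual, so check them against your instrument:

```python
def parse_metex(line):
    # Field widths are an assumption based on the "DC  0.000   V\n"
    # example; other measurement modes may need adjusting, and an
    # overrange reading would fail the float() conversion.
    mode = line[0:4].strip()
    value = float(line[4:12])
    unit = line[12:].strip()
    return mode, value, unit
```

For the example reply above this returns ("DC", 0.0, "V").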


Improving privacy with Iceweasel

06.08.2015 19:35

For several years now Debian has been shipping with a rebranded Firefox browser called Iceweasel. As far as I understand the matter, the reason is Mozilla's trademark policy which says that anything bearing the Firefox brand must be approved by them. Instead of dealing with the review process for each Debian-specific patch, Debian maintainers chose to strip the branding from the browser. This is something Mozilla's code actually makes provisions for.

The Iceweasel releases from the Debian Mozilla Team closely track the upstream. In contrast to other Firefox forks, changes to Iceweasel generally only improve integration with the Debian system. Packages released with the stable releases might also contain backported security fixes (although this might change soon).

A problem with Iceweasel is that it also identifies as such to all websites it loads through its User-Agent header. Iceweasel is quite a rarity among browsers, which makes its users easy to track across web pages. Currently EFF's Panopticlick shows that Iceweasel 39's User-Agent provides around 16 bits of identifying information. In contrast, a vanilla Firefox on Linux gives trackers around 11 bits. EFF's numbers change quite a lot over time though - I've seen 22 bits reported for Iceweasel a few weeks ago. In any case, if you have friends that like to watch Apache logs for fun, it's quite obvious when you're hitting their blogs if you are using Iceweasel.

In order to improve this situation I've created a very simple add-on for Iceweasel that removes the reference to Iceweasel from the User-Agent header. It consists only of a few lines of Javascript that run a regexp replace on the built-in User-Agent string and set the general.useragent.override preference. The User-Agent set this way should be identical to the vanilla Firefox of the same version running on Debian.
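The add-on itself is a few lines of Javascript, but the substitution is easy to sketch in Python. The User-Agent string below is my assumption of what Iceweasel 39 sends (the stock format appends an Iceweasel/version token after the Firefox one), so verify it against your own browser:

```python
import re


def strip_iceweasel(ua):
    # Remove the trailing "Iceweasel/<version>" token, leaving the
    # Firefox part of the string intact.
    return re.sub(r"\s*Iceweasel/[\w.]+", "", ua)


# Assumed Iceweasel 39 User-Agent string, for illustration only.
ua = ("Mozilla/5.0 (X11; Linux x86_64; rv:39.0) "
      "Gecko/20100101 Firefox/39.0 Iceweasel/39.0")
```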

The extension is not on Mozilla's add-on site, so you will have to fetch it from GitHub and build it yourself (a git clone followed by make should do the trick).

Thawed Weasel extension for Iceweasel

How well does it work? Panopticlick shows the expected reduction of around 5 to 6 bits of identifying information. This might not matter if a site is actively trying to fingerprint you and can run Javascript - system fonts and browser plugins are still very unique for a typical Debian desktop. But at least you don't stick out from access.log like a sore thumb.

You might ask why not just use one of the myriad existing User-Agent override add-ons. The trick is that I have not found any that would allow you to apply a search-and-replace regexp to the built-in User-Agent string. Without that, you either have to manually keep it up to date with the actual browser version, or risk sporting a unique, outdated User-Agent string once everyone else's browser has auto-updated. I don't want my computer to have another itch that needs regular scratching.

A related argument against this add-on would be that providing an accurate User-Agent string is good etiquette on the web. It helps administrators with browser usage statistics and with debugging browser-specific problems on their websites. Considering that the idea of Iceweasel is to have minimal changes against upstream Firefox, I think it is still within the boundaries of good behavior for it to present itself as Firefox. Whether this argument is valid or not is open to debate though. At the time of writing, Iceweasel 39.0-1~bpo70+1 has 36 patches applied against the upstream Firefox source, touching around 1800 lines of code.

Finally, of course, you can just install Mozilla's Linux build of Firefox on Debian. I'm sticking with Iceweasel because I prefer software managed through the package manager instead of dumping tarballs into /usr/local. Adding another distribution's repositories into Debian's /etc/apt/sources.list is just wrong though.


From F-Spot to Shotwell in Jessie

14.07.2015 20:24

F-Spot photo manager was dropped in Debian Jessie in favor of Shotwell. While Shotwell supports automatically importing data from F-Spot out-of-the box, the import is far from perfect. Here are some assorted notes from migrating my library. Note that this applies specifically to the version in Jessie (0.20.1).

  • Shotwell does not support storing multiple versions of a photo (F-Spot created a new version of a photo for each edit in GIMP, for instance). On import, each version is stored as a separate photo in Shotwell.

  • The tag hierarchy is correctly imported from F-Spot. However sometimes a tag is imported twice: once in its correct position in the hierarchy and once at the top level. This is because F-Spot at some point started embedding tags (without hierarchy data) inside image files themselves in the Xmp.dc.subject field. Shotwell import treats the F-Spot hierarchy and the embedded tags separately, resulting in duplication.

    One suggestion is to remove the embedded tags before import. However, this further modifies the files (which shouldn't have been modified in the first place, but you can't do anything about that now). Upon removing the field, the exiv2 tool also seems to corrupt vendor-specific fields in JPEG files (I don't think the truncation warnings are as benign as this thread suggests).

    A better way is to ignore the embedded tags on import. Apply this patch, recompile the Shotwell Debian package and use the modified version to import data from F-Spot:

    --- shotwell-0.20.1.orig/src/photos/PhotoMetadata.vala	2014-03-04 23:54:12.000000000 +0100
    +++ shotwell-0.20.1/src/photos/PhotoMetadata.vala	2015-07-12 13:11:34.021751079 +0200
    @@ -857,7 +857,6 @@
         private static string[] KEYWORD_TAGS = {
    -        "Xmp.dc.subject",
             "Iptc.Application2.Keywords"
         };

    After importing, you can revert back to the original version, shipped with Debian. The embedded tags are only read once on import.

  • Shotwell doesn't support GIF images. Any GIFs are skipped on import (and noted in the import log). As crazy as that sounds, I had some photos in my library from an old feature phone that saved photographs as GIF files. I converted them to PNG and manually re-imported them.

  • Time and date information is not imported from F-Spot at all. Shotwell reads only the data from the EXIF JPEG header. If you adjusted creation time in F-Spot, those modifications will be lost.

    Unfortunately, F-Spot itself seems to have had problems with storing this data in its database. It looks like the timestamps have been stored in different formats over time without proper migrations between versions. Looking at my F-Spot library, various photos have +1, 0 and -1 hour offsets compared to what they probably should be.

    I gave up on trying to correct the mess in the F-Spot library. I made a Python script that copied timestamps directly from the F-Spot SQLite database to the Shotwell database, but only when the difference was more than 1 hour. The script is quite hacky and probably too dangerous for general consumption. If you have similar issues and are not afraid of SQL, send me a mail and I'll share it.
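For the brave, the general shape of such a script is roughly as follows. The table and column names are my assumptions about the F-Spot and Shotwell schemas (check them with the sqlite3 .schema command first), and you should back up both databases before trying anything like this:

```python
import sqlite3


def copy_timestamps(fspot, shotwell, max_skew=3600):
    # Copy photo times from an open F-Spot database connection to a
    # Shotwell one, but only where the two disagree by more than
    # max_skew seconds. The photos.time and PhotoTable.exposure_time
    # columns are assumptions about the respective schemas.
    for filename, fspot_time in fspot.execute(
            "SELECT filename, time FROM photos"):
        row = shotwell.execute(
            "SELECT id, exposure_time FROM PhotoTable "
            "WHERE filename LIKE ?", ("%" + filename,)).fetchone()
        if row is None:
            continue
        photo_id, shotwell_time = row
        if abs(fspot_time - shotwell_time) > max_skew:
            shotwell.execute(
                "UPDATE PhotoTable SET exposure_time = ? WHERE id = ?",
                (fspot_time, photo_id))
    shotwell.commit()


# For example:
# copy_timestamps(sqlite3.connect("photos.db"), sqlite3.connect("photo.db"))
```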

In the end, Shotwell feels much more minimalist than F-Spot. Its user interface has its own set of quirks and leaves a lot to be desired, particularly regarding tag management. On the other hand, I have yet to see it crash, which was a common occurrence with F-Spot.
