Going to 35c3

12.12.2018 19:03

Just a quick note that I've managed to get a ticket for this year's Chaos Communication Congress. So I'll be in Leipzig at the end of the month, in case anyone wants to have a chat. I don't have anything specific planned yet. I haven't been to the congress since 2015 and I don't know how much has changed in the years since it moved from Hamburg. I've selected some interesting talks on Halfnarp, but I tentatively plan to spend more time talking with people than camping in the lecture halls. Drop me an email or a tweet at @avian2 if you want to meet.

35c3 refreshing memories logo.

Posted by Tomaž | Categories: Life | Comments »

Notes on using GIMP and Kdenlive for animation

08.12.2018 12:01

This summer I made a 60-second hand-drawn animation using GIMP and Kdenlive. I thought I might write down some notes about the process I used and the software problems I had to work around. Perhaps someone else considering a similar endeavor will find them useful.

For the sake of completeness, the hardware I used for image and video editing was a Debian GNU/Linux system with 8 GB of RAM and an old quad-core i5 750 CPU. For drawing I used a 7" Wacom Intuos tablet. For some rough sketches I also used an old iPad Air 2 with Animation Creator HD and a cheap rubber finger-type stylus. Animation Creator was the only proprietary software I used.

Screenshot of the GIMP Onion Layers plug-in.

I drew all the final animation cels in GIMP 2.8.18, as packaged in Debian Stretch. I used my Onion Layers plug-in extensively and added several enhancements to it as work progressed: it now has the option to slightly color tint the next and previous frames to emphasize in which direction outlines are moving, and I added new shortcuts for adding a layer to all frames and for adding a new frame. I found that GIMP will quite happily work with images with several hundred layers in 1080p resolution, and in general I didn't encounter any big problems with it. The only thing I missed was an easier way to preview the animation with proper timings, although flipping through frames by hand using keyboard shortcuts worked quite well.

For some of the trickier sequences I used the iPad to draw initial sketches. The rubber stylus was much too imprecise to do final outlines with it, but I found drawing directly on the screen more convenient for experimenting. I later imported those sketches into GIMP using my Animation Creator import plug-in and then drew over them.

Comparison of different methods of coloring cels in GIMP.

A cel naively colored using Bucket Fill (left), manually using the Paintbrush Tool (middle) and using the Color Layer plug-in (right).

I've experimented quite a bit with how to efficiently color in the finished outlines. Normal Bucket Fill leaves transparent pixels if used on soft anti-aliased outlines. The advice I got was to color the outlines manually with the Paintbrush Tool, but I found that much too time consuming. In the end I made a quick-and-ugly plug-in that helped automate the process somewhat. It used thresholding to create a sharp outline on a layer beneath the original outline. I then used Bucket Fill on that layer. This preserved the quality of the initial outline reasonably well.
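The gist of the plug-in was roughly the following. This is a minimal sketch of the idea as it could be tried in GIMP's Python-Fu console, assuming the outline is drawn on a transparent layer; the 127 threshold is just an example value:

# Sketch: make a hard-edged copy of the outline layer for Bucket Fill.
# Assumes the outline is on the active layer of the first open image.
import gimp
from gimp import pdb

image = gimp.image_list()[0]
outline = image.active_layer

# copy the outline and insert the copy directly beneath the original
fill = outline.copy()
fill.name = outline.name + " fill"
image.add_layer(fill, image.layers.index(outline) + 1)

# harden the soft anti-aliased alpha edge so that Bucket Fill on this
# layer doesn't leave transparent pixels around the outlines
pdb.plug_in_threshold_alpha(image, fill, 127)

gimp.displays_flush()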

I mostly used one GIMP XCF file per scene for cels and one file for the backgrounds (some scenes had multiple planes of backgrounds for parallax scrolling). I exported individual cels into transparent-background PNG files using the Export Layers plug-in. For the scene with two characters I later wished that I had used one XCF file per character, since that would have made it easier to adjust their relative timing.

I imported the backgrounds and cels into Kdenlive. Backgrounds were simple static Image Clips. For cels I mostly used Slideshow Clips with a small frame duration (e.g. a 3 frame duration for 8 drawn frames per second). For scenes where I wanted to adjust the timing of individual cels I instead imported them separately as images and dragged them manually onto the timeline, which was quite time consuming. The cels and backgrounds were composited using a large number of Composite & Transform transitions.

Kdenlive timeline for a single scene.

I was first using Kdenlive 16.12.2 as packaged by Debian but later found it too unstable. I switched to using the 18.04.1 AppImage release from the Kdenlive website. Switching versions was painful, since the project file didn't import properly in the newer version. Most transitions were wrong, so I had to redo much of the editing process.

I initially wanted to do everything in one monolithic Kdenlive project. However this proved increasingly inconvenient as work progressed. Even 18.04.1 was getting unstable with a huge number of clips and tracks on the timeline. I was also having trouble doing nice-looking dissolves between scenes that involved multiple tracks: sometimes seemingly independent Composite & Transforms were affecting each other in unpredictable ways. So in the end I mostly ended up with one Kdenlive project per scene. I rendered each scene to lossless H.264 and then imported the rendered scenes into a master Kdenlive project for final editing.

Regarding Kdenlive stability, I had the feeling that it didn't like Color Clips for some reason, or transitions to blank tracks. I'm not sure if there's really some bug there or whether it was just my imagination (I couldn't reliably reproduce any crash that I encountered), but the frequency of crashes seemed to go down significantly once I put a track with an all-black PNG at the bottom of my projects.

In general, the largest problem I had with Kdenlive was an issue with scaling. My Kdenlive project was 720p, but all my cels and backgrounds were in 1080p. It appeared that Composite & Transform would sometimes use 720p screen coordinates and sometimes 1080p coordinates in unpredictable ways. I think that the renderer implicitly scales down the bottom-most track to the project size if at some point in time it sees a frame larger than the project size. In the end I couldn't figure out the exact logic behind it and had to resort to random experimentation until it worked. Having separate, simpler project files for each scene helped significantly here.

Comparison of Kdenlive scaling quality.

Frame scaled explicitly from 1080p to 720p using Composite & Transform (left) and scaled implicitly during rendering (right).

Another thing I noticed was that the implicit scaling of video tracks seemed to use a lower-quality scaling algorithm than Composite & Transform, resulting in annoying visible changes in image quality. In the end, I forcibly scaled all tracks to 720p using an extra Composite & Transform, even where one was not strictly necessary.

Comparison of luminosity on transition to black using Composite & Transform and Fade to black.

Comparison of luminosity during a one second transition from a dark scene to black, between the Fade to black effect and the Composite & Transform transition. Fade to black reaches black faster than it should, but its luminosity does not jump up and down.

I was initially doing transitions between scenes using Composite & Transform, because I found adjusting the alpha values through keyframes more convenient than setting the lengths of Fade to/from black effects. However I noticed that Composite & Transform seems to have some kind of rounding issue: transitions using it showed a lot of banding and flickering in the final render. In the end I switched to using Dissolves and Fade to/from black, which looked better.

Finally, a few minor details I learned about video encoding. The H.264 quality setting in Kdenlive (CRF) is inverted: lower values mean higher quality. H.264 uses the YUV color space, while my drawings were in RGB. After rendering the final video some shades of gray got a bit of a green tint, which was due to the color space conversion in Kdenlive. As far as I know there is no way to avoid that, since GIMP and PNG only support RGB. In any case, it was quite minor and I was assured that I only saw it because I had been staring at these drawings for so long.

"Home where we are" title card.

How to sum this up? Almost every artist I talked with recommended using proprietary animation software and was surprised when I told them what I was using. I think that is in general reasonable advice (although the prices of such software seem anything but reasonable for a summer project). I was happy to spend some evenings writing code and learning GIMP internals instead of drawing, but I'm pretty sure that's more an exception than the rule.

There were certainly some annoyances that made me doubt my choice of software. Redoing all the editing after switching Kdenlive versions was one of those things. Other things were perhaps just me being too focused on minor technical details. In any case, I remember doing some projects with Cinelerra many years ago and I think Kdenlive is quite a significant improvement over it in terms of user interface and stability. Of course, neither GIMP nor Kdenlive was specifically designed for this kind of work (though I've heard Krita got native support for animation, so it might be worth checking out). If anything, the fact that it was possible for me to do this project shows how flexible open source tools can be, even when they are used in ways they were not meant for.

Posted by Tomaž | Categories: Life | Comments »

A summer animation project

30.11.2018 22:19

In December last year I went to an evening workshop on animation that was part of the Animateka festival in Ljubljana. It was fascinating to hear how a small group of local artists made a short animated film, from writing the script and making a storyboard to the final video. Having previously experimented with drawing a few seconds' worth of animation in GIMP, I was tempted to try at least once to go through this entire process and animate some kind of story. Not that I hoped to make anything remotely on the level that was presented there; they were professionals winning awards, I just wanted to do something for fun.

Unfortunately I was unable to attend the following workshops to learn more. At the time I was working for the Institute, wrapping up final months of an academic project, and at the same time just settling into an industry job. Between two jobs it was unrealistic to find space for another thing on my schedule.

I played with a few ideas here and there and thought I might prepare something to discuss at this year's GalaCon. During the Easter holidays I took a week off and, being somewhat burned out, I didn't have anything planned. So I used the spare time to step away from the life of an electrical engineer and come up with a short script based on a song I had randomly stumbled upon on YouTube some weeks before. I tried to be realistic and stick as much as possible to things I felt confident I could actually draw. By the end of the week I had a very rough 60-second animatic that I nervously shared around.

One frame of the animatic.

At the time I doubted I would actually do the final animation. I was encouraged by the responses I got to the script, but I hadn't previously taken notes on how much time it took me to do one frame of animation, so I wasn't sure how long a project like that would take to complete. It just looked like an enormous pile of cels to draw. And then I found a mail in my inbox saying that the scientific paper I had been unsuccessfully trying to publish for nearly 4 years had been accepted pending a major revision, and everything else was put on hold. It was another mad dash to repeat the experiments, process the measurements and catch the re-submission deadline.

By June my work at the Institute ended and I felt very much tired and disappointed and wanted to do something different. With a part of my week freed up I decided to spend the summer evenings working on this animation project I came up with during Easter. I went through some on-line tutorials to refresh my knowledge and, via a recommendation, bought the Animator's Survival Kit book, which naturally flew completely over my head. By August I was able to bother some con-goers in Ludwigsburg with a more detailed animatic and one fully animated scene. I was very grateful for the feedback I got there and found it encouraging that people were seeing some sense in all of it.

Wall full of animation scraps.

At the end of the summer I had the wall above my desk full of scraps and of course I was nowhere near finished. I had underestimated how much work was needed and was too optimistic in thinking that I would be able to dedicate more than a few hours per week to the project. I scaled back my expectations a bit and made a new, more elaborate production spreadsheet. The spreadsheet said I would finish in November, which in the end actually turned out to be a rather good estimate. The final render landed on my hard disk on the evening of 30 October.

Production spreadsheet for my animation project.

So here it is, a 60 second video that is the result of about 6 months of on-and-off evening work by a complete amateur. Looking at it one month later, it has an unoriginal character design, obviously inspired by the My Little Pony franchise. I guess that's not enough to appeal to fans of that show, yet enough to repel everybody else. There's a cheesy semblance of a story. But it means surprisingly much to me, and I probably would never have finished it had I gone with something more ambitious and original.

I've learned quite a lot about animation and video editing. I also wrote quite a bit of code for making this kind of animation possible using a completely free software stack and when time permits I plan to write another post about the technical details. Perhaps some part of my process can be reused by someone with a bit more artistic talent. In the end it was a fun, if eventually somewhat tiring, way to blow off steam after work and reflect on past decisions in life.

Home where we are

(Click to watch Home where we are video)

Posted by Tomaž | Categories: Life | Comments »

The case of disappearing PulseAudio daemon

03.11.2018 19:39

Recently I was debugging an unusual problem with the PulseAudio daemon. PulseAudio handles high-level details of audio recording, playback and processing. These days it is used by default in many desktop Linux distributions. The problem I was investigating was on an embedded device using Linux kernel 4.9. I relatively quickly found a way to reproduce it. However finding the actual cause was surprisingly difficult and led me into the kernel source and learning about the real-time scheduler. I thought I might share this story in case someone else finds themselves in a similar situation.

The basic issue was that the PulseAudio daemon occasionally restarted. This reset some run-time configuration that should not be lost while the device was operating and also broke connections to clients that were talking to the daemon. Restarts were seemingly connected to the daemon or the CPU being under load. For example, the daemon would restart if many audio files were played simultaneously. I could reproduce the problem on PulseAudio 12.0 using a shell script similar to the following:

n=0
while [ $n -lt 100 ]; do
	pacmd play-file foo.wav 0
	n=$(($n+1))
done

This triggers 100 playbacks of the foo.wav file at almost the same instant and would reliably make the daemon restart on the device. However I sometimes saw restarts with fewer than ten simultaneous audio plays. A similar script with 1000 plays would sometimes also cause a restart on my Intel-based laptop using the same PulseAudio version. This made it easier to investigate the problem, since I didn't need to do the debugging remotely on the device.

syslog held some clues about what was happening. The PulseAudio process was apparently being sent the SIGKILL signal. systemd detected that and restarted the service shortly after:

systemd[1]: pulseaudio.service: Main process exited, code=killed, status=9/KILL
systemd[1]: pulseaudio.service: Unit entered failed state.
systemd[1]: pulseaudio.service: Failed with result 'signal'.

However, there were no other clues as to what sent the SIGKILL or why. The kernel log in dmesg had nothing whatsoever related to this. Increasing logging verbosity in systemd and PulseAudio showed nothing relevant. Attaching a debugger and strace to the PulseAudio process showed that the signal was received at seemingly random points in the execution of the daemon. This suggested that the problem was not directly related to any specific line of code, but otherwise led me nowhere.

When I searched the web, all suggestions seemed to point to the kernel killing the process due to an out-of-memory condition. The kernel being the source of the signal seemed reasonable, however an OOM condition is usually clearly logged. Another guess was that systemd itself was killing the process, after I learned about its resource control feature. This turned out to be a dead end as well.

The first really useful clue came when Gašper gave me a crash course on using perf. After setting up debugging symbols for the kernel and the PulseAudio binary, I used the following:

$ perf_4.9 record -g -a -e 'signal:*'
(trigger the restart here)
$ perf_4.9 script

This printed out a nice backtrace of the code that generated the problematic SIGKILL signal (amid noise about other, uninteresting signals being sent between processes on the system):

alsa-sink-ALC32 12510 [001] 24535.773432: signal:signal_generate: sig=9 errno=0 code=128 comm=alsa-sink-ALC32 pid=12510 grp=1 res=0
            7fffbae8982d __send_signal+0x80004520223d ([kernel.kallsyms])
            7fffbae8982d __send_signal+0x80004520223d ([kernel.kallsyms])
            7fffbaef0652 run_posix_cpu_timers+0x800045202522 ([kernel.kallsyms])
            7fffbaea7aa9 scheduler_tick+0x800045202079 ([kernel.kallsyms])
            7fffbaefaa80 tick_sched_timer+0x800045202000 ([kernel.kallsyms])
            7fffbaefa480 tick_sched_handle.isra.12+0x800045202020 ([kernel.kallsyms])
            7fffbaefaab8 tick_sched_timer+0x800045202038 ([kernel.kallsyms])
            7fffbaeebbfe __hrtimer_run_queues+0x8000452020de ([kernel.kallsyms])
            7fffbaeec2dc hrtimer_interrupt+0x80004520209c ([kernel.kallsyms])
            7fffbb41b1c7 smp_apic_timer_interrupt+0x800045202047 ([kernel.kallsyms])
            7fffbb419a66 __irqentry_text_start+0x800045202096 ([kernel.kallsyms])

alsa-sink-ALC32 is the thread in the PulseAudio daemon that handles the interface between the daemon and the ALSA driver for the audio hardware. The stack trace shows that the signal was generated in the context of that thread; however, the originating code was called from a timer interrupt, not a syscall. Specifically, the run_posix_cpu_timers function seemed to be the culprit. This was consistent with the random debugger results I saw before, since interrupts are not in sync with the code running in the thread.

Some digging later I found the following code, which is reached from run_posix_cpu_timers via some intermediate static functions. Those probably got inlined by the compiler and hence don't appear in the perf stack trace:

if (hard != RLIM_INFINITY &&
    tsk->rt.timeout > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) {
	/*
	 * At the hard limit, we just die.
	 * No need to calculate anything else now.
	 */
	__group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
	return;
}

Now things started to make sense. The Linux kernel implements a limit on how much time a thread with real-time scheduling priority can use before cooperatively yielding the CPU to other threads and processes (via a blocking syscall, for instance). If a thread hits this time limit it is silently sent the SIGKILL signal by the kernel. Kernel resource limits are documented in the setrlimit man page (the relevant limit here is RLIMIT_RTTIME). The PulseAudio daemon was setting the ALSA thread to real-time priority and the thread was getting killed under load.

Using real-time scheduling seems to be the default in PulseAudio 12.0 and the time limit set for the process is 200 ms. The limit for a running daemon can be inspected from shell using prlimit:

$ prlimit --pid PID | grep RTTIME
RTTIME     timeout for real-time tasks           200000    200000 microsecs

Details of real-time scheduling can be adjusted in /etc/pulse/daemon.conf, for instance with:

realtime-scheduling = yes
rlimit-rttime = 400000

Just to make sure that an RTTIME over-run indeed produces these symptoms, I wrote the following C program that intentionally triggers it. Running it confirmed that the resulting SIGKILL isn't logged anywhere and that it produces a similar perf backtrace:

#include <stdio.h>
#include <sched.h>
#include <sys/resource.h>

int main(int argc, char** argv)
{
	/* switch the process to the real-time Round Robin scheduler */
	struct sched_param sched_param;
	if (sched_getparam(0, &sched_param) < 0) {
		printf("sched_getparam() failed\n");
		return 1;
	}
	sched_param.sched_priority = sched_get_priority_max(SCHED_RR);
	if (sched_setscheduler(0, SCHED_RR, &sched_param)) {
		printf("sched_setscheduler() failed\n");
		return 1;
	}
	printf("Scheduler set to Round Robin with priority %d...\n",
			sched_param.sched_priority);
	fflush(stdout);

	/* limit real-time CPU time between blocking syscalls to 500 ms */
	struct rlimit rlimit;
	rlimit.rlim_cur = 500000;
	rlimit.rlim_max = rlimit.rlim_cur;

	if (setrlimit(RLIMIT_RTTIME, &rlimit)) {
		printf("setrlimit() failed\n");
		return 1;
	}
	printf("RTTIME limit set to %ld us...\n",
			(long)rlimit.rlim_max);

	/* busy-loop without ever yielding the CPU; the kernel should
	 * silently send SIGKILL once the RTTIME limit is exceeded */
	printf("Hogging the CPU...\n");
	while(1);
}

It would be really nice if the kernel logged the reason for this signal somewhere, like it does with OOM kills. It might be that in this particular case the developers wanted to avoid calling possibly expensive logging functions from interrupt context. On the other hand, it seems that the kernel by default doesn't log any process kills due to resource limit over-runs at all. Failed syscalls can be logged using auditd, but that wouldn't help here, as no syscalls actually failed.

As far as this particular PulseAudio application was concerned, there weren't really any perfect solutions. This didn't look like a bug in PulseAudio, but rather a fact of life in a constrained environment with real-time tasks. The PulseAudio man page discusses some trade-offs of real-time scheduling (which is nice in hindsight, but you first have to know where to look). In my specific case, there were more or less only three ways to proceed:

  1. Disable RTTIME limit and accept that PulseAudio might freeze other processes on the device for an arbitrary amount of time,
  2. disable real-time scheduling and accept occasional skips in the audio due to other processes taking the CPU from PulseAudio for too long, or
  3. accept the fact that PulseAudio will restart occasionally and make other software on the device recover from this case as best as possible.

After considering implications to the functionality of the device I went with the last one in the end. I also slightly increased the default RTTIME limit so that restarts would be less common while still having an acceptable maximum response time for other processes.

Posted by Tomaž | Categories: Code | Comments »

Analyzing PIN numbers

13.10.2018 12:17

Since I already had a dump from haveibeenpwned.com on my drive from my earlier password check, I thought I could use this opportunity to do some more analysis on it. Six years ago DataGenetics blog posted a detailed analysis of 4-digit numbers that were found in password lists from various data breaches. I thought it would be interesting to try to reproduce some of their work and see if their findings still hold after a few years and with a significantly larger dataset.

DataGenetics didn't specify the source of their data, except that it contained 3.4 million four-digit combinations. Guessing from the URL, their analysis was published in September 2012. I've done my analysis on the pwned-passwords-ordered-by-hash.txt file downloaded from haveibeenpwned.com on 6 October (inside the 7-Zip archive the file had a timestamp of 11 July 2018, 02:37:47). The file contains 517.238.891 SHA-1 hashes with associated frequencies. By searching for the SHA-1 hashes that correspond to the 4-digit numbers from 0000 to 9999, I found that all of them were present in the file. The total sum of their frequencies was 14.479.676 (see my previous post for the method I used to search the file). Hence my dataset was roughly 4 times the size of DataGenetics'.
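Collecting the frequencies is a short loop over all 10000 combinations, reusing the look-based binary search described in the previous post. A sketch, assuming the patched look binary and the uncompressed dump sit in the current directory:

import hashlib
import subprocess

DUMP = "pwned-passwords-ordered-by-hash.txt"

def lookup_count(h):
	# binary search in the alphabetically sorted dump with BSD look;
	# look exits with status 1 when the hash is not present
	try:
		o = subprocess.check_output(["./look", "-b", h + ":", DUMP])
	except subprocess.CalledProcessError:
		return 0
	return int(o.split(b':')[1])

freq = {}
for n in range(10000):
	pin = "%04d" % n
	h = hashlib.sha1(pin.encode('ascii')).hexdigest().upper()
	freq[pin] = lookup_count(h)

total = sum(freq.values())
for pin, cnt in sorted(freq.items(), key=lambda x: -x[1])[:20]:
	print("%s %5.2f%%" % (pin, 100.0 * cnt / total))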

Here are the top 20 most common numbers appearing in the dump, compared to their rank (nold) on the top 20 list from DataGenetics; a dash means the number did not appear on their list:

nnew  nold  PIN   frequency
   1     1  1234       8.6%
   2     2  1111       1.7%
   3     -  1342       1.1%
   4     3  0000       1.0%
   5     4  1212       0.5%
   6     8  4444       0.4%
   7     -  1986       0.4%
   8     5  7777       0.4%
   9    10  6969       0.4%
  10     -  1989       0.4%
  11     9  2222       0.3%
  12    13  5555       0.3%
  13     -  2004       0.3%
  14     -  1984       0.2%
  15     -  1987       0.2%
  16     -  1985       0.2%
  17    16  1313       0.2%
  18    11  9999       0.2%
  19    17  8888       0.2%
  20    14  6666       0.2%

This list looks similar to the results published by DataGenetics. The first two PINs are the same, but the distribution is a bit less skewed: in their results the four most popular PINs accounted for 20% of all PINs, while here they only make up about 12%. It also seems that numbers that look like years (1986, 1989, 2004, ...) have become more popular. In their top 20 list the only two such numbers were 2000 and 2001.

Cumulative frequency of PINs

DataGenetics found that the number 2580 ranked highly, in position 22. They concluded that this is an indicator that a lot of these PINs were originally devised on devices with numerical keypads such as ATMs and phones (on those keypads, 2580 is straight down the middle column of keys), even though the source of their data were compromised websites, where users would more commonly use a 104-key keyboard. In the haveibeenpwned.com dataset, 2580 ranks at position 65, so slightly lower. It is still in the top quarter by cumulative frequency.

Here are the 20 least common numbers appearing in the dump, again compared to their rank on the bottom 20 list from DataGenetics (again, a dash means the number was not on their list):

 nnew  nold  PIN   frequency
 9981     -  0743   0.00150%
 9982     -  0847   0.00148%
 9983     -  0894   0.00147%
 9984     -  0756   0.00146%
 9985     -  0638   0.00146%
 9986     -  0934   0.00146%
 9987     -  0967   0.00145%
 9988     -  0761   0.00144%
 9989     -  0840   0.00142%
 9990     -  0736   0.00141%
 9991     -  0835   0.00141%
 9992     -  0639   0.00139%
 9993     -  0742   0.00139%
 9994     -  0939   0.00132%
 9995     -  0739   0.00129%
 9996     -  0849   0.00126%
 9997     -  0938   0.00125%
 9998     -  0837   0.00119%
 9999  9995  0738   0.00108%
10000     -  0839   0.00077%

Not surprisingly, most numbers don't appear in both lists. Since these have the lowest frequencies, even the smallest changes significantly alter the ordering. The least common number in DataGenetics' dump, 8068, is here in place 9302, so still pretty much at the bottom. I guess not many people choose their PINs after the iconic Intel CPU.

Here is a grid plot of the distribution, drawn in the same way as in DataGenetics' post. The vertical axis depicts the right two digits while the horizontal axis depicts the left two digits. The color shows the relative frequency on a log scale (blue - least frequent, yellow - most frequent).

Grid plot of the distribution of PINs
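Continuing from the freq dictionary built in the earlier sketch, a plot like this can be produced with a few lines of matplotlib (the viridis colormap is my assumption; any blue-to-yellow map gives a similar picture):

import numpy as np
import matplotlib.pyplot as plt

# arrange the 10000 frequencies into a 100x100 grid: the column is the
# left two digits of the PIN, the row is the right two digits
grid = np.zeros((100, 100))
for pin, cnt in freq.items():
	grid[int(pin[2:]), int(pin[:2])] = cnt

# log scale; all 10000 PINs appear in the dump, so there are no zeros
plt.imshow(np.log10(grid), origin='lower', cmap='viridis')
plt.xlabel("left two digits")
plt.ylabel("right two digits")
plt.show()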

Many of the same patterns discussed in DataGenetics' post are also visible here:

  • The diagonal line shows the popularity of PINs where the left two and right two digits repeat (patterns like ABAB), with further symmetries superimposed on it (e.g. AAAA).

  • The area in the lower left corner shows numbers that can be interpreted as dates (vertically MMDD and horizontally DDMM). The resolution is good enough that you can actually see which months have 28, 30 or 31 days.

  • The strong vertical line at 19 and 20 shows numbers that can be interpreted as years. The 2000s are more common in this dump. Not surprising, since we're further into the 21st century than when DataGenetics' analysis was done.

  • Interestingly, there is a significant shortage of numbers that begin with 0, which can be seen as a dark vertical stripe on the left. A similar pattern can be seen in DataGenetics' dump, although they don't comment on it. One possible explanation would be if some proportion of the dump had gone through a step that stripped leading zeros, such as a conversion from string to integer and back (maybe even an Excel table?), as the snippet after this list illustrates.
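For example, a hypothetical round-trip through an integer type silently turns one PIN into another:

# how a string -> integer -> string round-trip mangles PINs
pin = "0738"
print(str(int(pin)))   # prints "738" - the leading zero is gone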

In conclusion, the findings from DataGenetics' post still mostly seem to hold. Don't use 1234 for your PIN. Don't choose numbers that have symmetries in them or have years or dates in them. These all significantly increase the chances that someone will be able to guess them. And of course, don't re-use your SIM card or ATM PIN as a password on websites.

Another thing to note is that DataGenetics concluded that their analysis was possible because of leaks of clear-text passwords. However, PINs provide a very small search space of only 10000 possible combinations. It was trivial for me to perform this analysis even though the haveibeenpwned.com dump only provides SHA-1 hashes, not clear-text. With a warmed-up disk cache, the binary search only took around 30 seconds for all 10000 combinations.

Posted by Tomaž | Categories: Life | Comments »

Checking passwords against haveibeenpwned.com

07.10.2018 11:34

haveibeenpwned.com is a useful website that aggregates database leaks with user names and passwords. They have an on-line form where you can check whether your email has been seen in publicly available database dumps. The form also tells you in which dumps they have seen your email. This gives you a clue which of your passwords has been leaked and should be changed immediately.

For a while now haveibeenpwned.com has been reporting that my email appears in a number of combined password and spam lists. I didn't find that surprising. My email address is publicly displayed on this website and is routinely scraped by spammers. If there was any password associated with it, I thought it had come from the 2012 LinkedIn breach. I knew my old LinkedIn password had been leaked, since the scam mails I get commonly mention it.

Screenshot of a scam email.

However, it came to my attention that some email addresses are in the LinkedIn leak, but not in these combo lists I appear in. This seemed to suggest that my appearance in those lists might not only be due to the old LinkedIn breach and that some of my other passwords could have been compromised. I thought it might be wise to double-check.

haveibeenpwned.com also provides an on-line form where they can directly check your password against their database. This seems like a really bad practice to encourage, regardless of their assurances of security, and I was not going to start sending off my passwords to a third party. Luckily the same page also provides a dump of SHA-1 hashed passwords you can download and check locally.

I used Transmission to download the dump over BitTorrent. After uncompressing the 7-Zip file I ended up with a 22 GB text file with one SHA-1 hash per line:

$ head -n5 pwned-passwords-ordered-by-hash.txt
000000005AD76BD555C1D6D771DE417A4B87E4B4:4
00000000A8DAE4228F821FB418F59826079BF368:2
00000000DD7F2A1C68A35673713783CA390C9E93:630
00000001E225B908BAC31C56DB04D892E47536E0:5
00000006BAB7FC3113AA73DE3589630FC08218E7:2

I chose the "ordered by hash" version of the file, so the hashes are sorted alphabetically. The number after the colon seems to be the number of occurrences of this hash in their database; more popular passwords will have a higher number.

The alphabetical order of the file makes it convenient to do an efficient binary search on it as-is. I found the hibp-pwlookup tool for searching the password database, but it requires you to import the data into PostgreSQL, so it seems that its author was not aware of this convenience. In fact, there's already a BSD command-line tool that knows how to do binary search on such sorted text files: look (it's in the bsdmainutils package on Debian).

Unfortunately the current look binary in Debian is somewhat broken and bails out with a File too large error on files larger than 2 GB. It needs recompiling with a simple patch to its Makefile to work on this huge password dump. After I fixed this, I was able to quickly look up the hashes:

$ ./look -b 5BAA61E4 pwned-passwords-ordered-by-hash.txt
5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8:3533661

By the way, Stephane on the Debian bug tracker also mentions a method for searching the file without uncompressing it first. Since I already had it uncompressed on the disk I didn't bother. Anyway, the next step was to automate the process. I used a Python script similar to the following. The check() function returns True if the password in its argument is present in the database:

import subprocess
import hashlib

def check(passwd):
	"""Return True if passwd appears in the haveibeenpwned.com dump."""
	s = passwd.encode('ascii')
	h = hashlib.sha1(s).hexdigest().upper()
	p = "pwned-passwords-ordered-by-hash.txt"

	# binary search for the hash using the patched BSD look
	try:
		o = subprocess.check_output([
			"./look",
			"-b",
			"%s:" % (h,),
			p])
	except subprocess.CalledProcessError as exc:
		# look exits with status 1 when the key was not found
		if exc.returncode == 1:
			return False
		else:
			raise

	# the number after the colon is the occurrence count
	l = o.split(b':')[1].strip()

	print("%s: %d" % (
		passwd,
		int(l.decode('ascii'))))

	return True

def main():
	check('password')

if __name__ == '__main__':
	main()

Before I was able to check the passwords I keep stored in Firefox, I stumbled upon another hurdle. Recent versions of Firefox do not provide any facility for exporting passwords from the password manager. There are some third-party tools for that, but I found them hard to trust. I also had my doubts about how complete they are: Firefox has switched between several password storage mechanisms over time, and re-implementing retrieval for all of them seems a non-trivial task.

In the end, I opted to get them from the Browser Console using the following JavaScript, entered line-by-line into the console (I adapted this code from a post by cor-el on the Mozilla forums):

var tokendb = Cc["@mozilla.org/security/pk11tokendb;1"].createInstance(Ci.nsIPK11TokenDB);
var token = tokendb.getInternalKeyToken();
token.login(true);

var passwordmanager = Cc["@mozilla.org/login-manager;1"].getService(Ci.nsILoginManager);
var signons = passwordmanager.getAllLogins({});
var json = JSON.stringify(signons, null, 1);

I simply copy-pasted the value of the json variable here into a text editor and saved it to a file. I used an encrypted volume, so passwords didn't end up in clear-text on the drive.

I hope this will help anyone else check their passwords against the database of leaks without exposing them to a third party in the process. Fortunately for me, this check didn't reveal any really nasty surprises. I did find more hits in the database than I'm happy with, however all of them were due to certain poor password policies about which I can do very little.

Posted by Tomaž | Categories: Life | Comments »

Extending GIMP with Python talk

22.09.2018 18:47

This Thursday I presented a talk on writing GIMP plug-ins in Python at the monthly Python Meetup in Ljubljana. I gave a brief guide on how to get started and shared some tips that I would personally have found useful when I first started playing with GIMP. I didn't go into the details of the various functions but tried to keep it high-level, showing most of the things live in the Python REPL window and demoing some of the plug-ins I've written.

I wanted to give this talk for two reasons. The first was that I had struggled to find such a high-level and up-to-date introduction to writing plug-ins myself, and I thought it would be useful for anyone else diving into GIMP's Python interface. I tried to prepare the slides so that they can serve as a useful document on their own (you can find the slides here, and the example plug-in on GitHub). The slides also include some useful links for further reading.

Annotated "Hello, World!" GIMP plug-in.

The second reason was that I was growing unhappy with the topics presented at these meetups. It seemed to me that recently most of them revolved around the web and Python at the workplace. I can understand why this is so. The event needs sponsors and they want to show potential new hires how awesome it is to work with them. The web is eating the world, Python meetups or otherwise, and it all leads to a kind of monotonous series of events. So it's been a while since I've heard a fun subject discussed, and I thought I might stir things up a bit.

To be honest, I was initially quite unsure whether this was a good venue for the talk. I discussed the idea with a few friends who are also regular meetup visitors and their comments encouraged me to go on with it. Afterwards I was immensely surprised at the positive response. I had prepared a short tour of GIMP for the start of the talk, thinking that most people would not be familiar with it. It turned out that almost everyone in the audience used GIMP, so I skipped it entirely. I was really happy to hear that people found the topic interesting, and it reminded me what a pleasure it is to give a well received talk.

Posted by Tomaž | Categories: Life | Comments »

Getting ALSA sound levels from a command-line

01.09.2018 14:33

Sometimes I need to get an idea of the signal levels coming from an ALSA capture device, like the line-in on a sound card, when I only have an ssh session available. For example, I've recently been exploring a case of radio-frequency interference causing audible noise in the audio circuit of an embedded device. It's straightforward to load raw samples into a Jupyter notebook or Audacity and run some quick analysis (or simply listen to the recording on headphones). But what I've really been missing is a simple command-line tool that would show me some RMS numbers. This way I wouldn't need to transfer potentially large wav files around quite so often.

I felt like a command-line VU meter was something that should already exist, but finding anything like that on Google proved elusive. Over time I've ended up with a bunch of custom half-finished Python scripts. However Python is slow, especially on low-powered ARM devices, so I was considering writing a more optimized version in C. Luckily I've recently come across this recipe for sox, which does exactly what I want. Sox is easily apt-gettable on all the Debian flavors I commonly care about, and even on slow CPUs the following doesn't take noticeably longer than it takes to record the data:

$ sox -q -t alsa -d -n stats trim 0 5
             Overall     Left      Right
DC offset  -0.000059 -0.000059 -0.000046
Min level  -0.333710 -0.268066 -0.333710
Max level   0.273834  0.271820  0.273834
Pk lev dB      -9.53    -11.31     -9.53
RMS lev dB    -25.87    -26.02    -25.73
RMS Pk dB     -20.77    -20.77    -21.09
RMS Tr dB     -32.44    -32.28    -32.44
Crest factor       -      5.44      6.45
Flat factor     0.00      0.00      0.00
Pk count           2         2         2
Bit-depth      15/16     15/16     15/16
Num samples     242k
Length s       5.035
Scale max   1.000000
Window s       0.050

This captures 5 seconds of audio (trim 0 5) from the default ALSA device (-t alsa -d), runs it through the stats filter and discards the samples without saving them to a file (-n). The stats filter calculates some useful statistics and dumps them to standard error. A different capture device can be selected through the AUDIODEV environment variable.

The displayed values are pretty self-explanatory: min and max levels show extremes in the observed samples, scaled to ±1 (so they are independent of the number of bits in ADC samples). Root-mean-square (RMS) statistics are scaled so that 0 dB is the full-scale of the ADC. In addition to the overall mean RMS level you also get peak and trough values measured over a short sliding window (the length of this window is configurable and is shown on the last line of the output). This gives you some idea of the dynamics in the signal as well as the overall volume. Description of other fields can be found in the sox(1) man page.
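If you do end up with a wav file and want the same number without sox, the core calculation is simple. Here is a rough Python equivalent of the "RMS lev dB" line, assuming a 16-bit PCM recording:

import sys
import wave
import numpy as np

def rms_lev_db(path):
	with wave.open(path, 'rb') as w:
		assert w.getsampwidth() == 2  # 16-bit samples assumed
		x = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
	# scale to +/-1 so that 0 dB corresponds to ADC full-scale,
	# the same convention the sox stats filter uses
	x = x.astype(np.float64) / 32768.0
	return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

if __name__ == '__main__':
	print("RMS lev dB %8.2f" % rms_lev_db(sys.argv[1]))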

Posted by Tomaž | Categories: Code | Comments »

Accuracy of a cheap USB multimeter

12.08.2018 11:08

Some years ago I bought this cheap USB gadget off Amazon. You connect it in-line with a USB-A device and it shows voltage and current on the power lines of a USB bus. There were quite a few slightly different models available under various brands. Apart from Muker mine doesn't have any specific model name visible (a similar product is currently listed as Muker V21 on Amazon). I chose this particular model because it looked like it had a nicer display than the competition. It also shows the time integral of the current to give an estimate of the charge transferred.

Muker V21 USB multimeter.

Given that it cost less than 10 EUR when I bought it I didn't have high hopes as to its accuracy. I found it useful to quickly check for life signs in a USB power supply, and to check whether a smartphone switches to fast charge or not. However I wouldn't want to use it for any kind of measurement where accuracy is important. Or would I? I kept wondering how accurate it really is.

For comparison I took a Voltcraft VC220 multimeter. True, this is not a particularly high-precision instrument either, but it does come with a list of measurement tolerances in its manual. At least I could check whether the Muker's readings fall within the error range of the VC220.

My first test measured open-circuit voltage. Muker's input port was connected to a lab power supply and the output port was left unconnected. I recorded the voltage displayed by the VC220 and the Muker for a range of set values (the VC220 was measuring voltage on the input port). Below approximately 3.3 V the Muker's display turned off, so that seems to be the lowest voltage it can measure. On the graph below you can see a comparison. Error bars show the VC220's measurement tolerance for the 20 V DC range I used in the experiment.

Comparison of voltage measurements.

In the second test I connected an electronic DC load to Muker's output port and varied the load current. I recorded the current displayed by the VC220 (on the output port) and Muker. Again, error bars on the graph below show the maximum VC220 error as per its manual.

Comparison of current measurements.

Finally, I was interested in how much resistance the Muker adds to the supply line. I noticed that some devices will not fast charge when the Muker is connected in-line, but will do so when connected directly to the charger. The USB data lines seem to pass directly through the device, so USB DCP detection shouldn't be the cause of that.

I connected Muker with a set of USB-A test leads to a lab power supply and a DC load. The total cable length was approximately 90 cm. With 2.0 A of current flowing through the setup I measured the voltages on both sides of the leads. I did the measurement with the Muker connected in the middle of the leads and with just the two leads connected together:

                           With Muker   Leads only
Current [A]                       2.0          2.0
Voltage, supply side [V]         5.25         5.25
Voltage, load side [V]           3.61         4.15
Total resistance [mΩ]             820          550
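The resistance rows simply follow from Ohm's law applied to the voltage drop across the leads; a quick check of the arithmetic:

# total series resistance R = (V_supply - V_load) / I
def series_resistance(v_supply, v_load, current):
	return (v_supply - v_load) / current

r_with = series_resistance(5.25, 3.61, 2.0)   # 0.82 ohm = 820 mOhm
r_leads = series_resistance(5.25, 4.15, 2.0)  # 0.55 ohm = 550 mOhm
print("Muker adds %.0f mOhm" % ((r_with - r_leads) * 1000.0))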

Muker V21 connected to USB test leads.

In summary, the current displayed by the Muker device seems to be reasonably correct. Measurements fell well within the tolerances of the VC220 multimeter, with the two instruments deviating by less than 20 mA over most of the range. Only at 1.8 A and above did the difference start to increase. Voltage readings seem much less accurate and the Muker's measurements appeared too high. However, they did fall within the measurement tolerances between 4.2 and 5.1 V.

In regard to the increased resistance of the power supply line, it seems that the Muker adds about 270 mΩ. I suspect a significant part of that is the contact resistance of the one additional USB-A connector pair in the line. The values I measured differed quite a lot when I wiggled the connectors.

Posted by Tomaž | Categories: Analog | Comments »

Investigating the timer on a Hoover washing machine

20.07.2018 17:47

One day around last Christmas I stepped into the bathroom and found myself standing in water. The 20-odd year old Candy Alise washing machine had finally broken a seal. I had already suspected for some time that it wasn't rinsing the detergent well enough from the clothes, so it was an easy decision to just scrap it. I spent the holidays shopping for a new machine, taking apart and reassembling bathroom furniture that was in the way, and fixing various collateral damage on the plumbing. A couple of weeks later, and thanks to copious amounts of help from my family, I was finally able to do laundry again without flooding the apartment in the process.

I bought a Hoover WDXOC4 465 AC combined washer-dryer. My choice was mostly based on the fact that a local shop had it on stock and its smaller depth compared to the old Candy meant a bit more floor space in the small bathroom. Even though I was trying to avoid it, I ended up with a machine with capacitive touch buttons, a NFC interface and a smartphone app.

Front panel on a Hoover WDXOC4 465 AC washer.

After half a year the machine works reasonably well. I gave up on Hoover's Android app the moment it requested to read my phone's address book, but thankfully all the features I care about can be used through the front panel. Comments I saw on the web claiming that the dryer doesn't work have so far turned out to be unfounded. My only complaint is that 3 or 4 times the washer didn't do a spin cycle when it should have. I suspect that the quality of the embedded software is less than stellar. Sometimes I hear the inlet valve or the drum motor momentarily stop and engage again, and I wonder if that's due to a bug or a controller reset, or if they are meant to work that way.

Anyway, like most similar machines, this one displays the remaining time until the end of the currently running program. There's a 7-segment display on the front panel that counts down hours and minutes. One of the weirder things I noticed is that the countdown seems to use Microsoft minutes. I would look at the display to see how many minutes were left, and when I looked again later it would seem to be showing more time remaining, not less. I was curious how that works, and whether it was just my bad memory or the machine really showing bogus values, so I decided to do an experiment.

I took a similar approach as in my investigation of the water heater years ago. I loaded up the machine, set it up for a wash-only cycle (6 kg cottons program, 60°C, 1400 RPM spin cycle, stain level 2) and recorded the display on the front panel with a laptop camera. I then read out the timer values from the video with a simple Python program and compared them to the frame timestamps. The results are below: the real elapsed time is on the horizontal axis and the display readout is on the vertical axis. The dashed gray line shows what the timer would ideally display if the initial estimate were perfect:

Countdown display on a Hoover washer versus real time.

The timer first shows a reasonable estimate of 152 minutes at the start of the program. The filled-in area on the graph around 5 minutes in is when "Kg Mode" is active. At that time the display shows a little animation as the washer supposedly weighs the laundry. My OCR program got confused by that, so the time values there are invalid. However, after the display returns to the countdown, the displayed time is significantly lower, at just below 2 hours.

"Kg Mode" description from the Hoover manual.

Image by Candy Hoover Group S.r.l.

That seems fine at first - I didn't use a full 6 kg load, so it's reasonable to believe that the machine adjusted the wash time accordingly. However the minutes on the display then count down slower than real time. So much slower in fact that they more than make up for the difference and the machine ends the cycle after 156 minutes, 4 minutes later than the initial estimate.

You can also see that the display jumps up and down during the program, with the largest jump around 100 minutes in. Even when it seems to be running consistently, it will often count a minute down, then up, and then down again. You can see evidence of that on the graph below, which shows successive differences of the timer readout versus time. I've verified this visually on the video and it's not an artifact of my OCR.

Timer display differences on a Hoover washer.

Why is it doing that? Is it just buggy software, an oscillator running slow, or maybe someone figured that people would be happier when a machine shows a lower number at first and then slows down the passing minutes? Hard to say. Even if you don't tend to take videos of your washing machine, it's hard not to notice that the timer stands still at the last 1 minute for around 7 minutes.

Incidentally, does anyone know how the weighing function works? I find it hard to believe that they actually measure the weight with a load cell. They must have some cheaper way, perhaps by measuring the time it takes for the water level to rise in the drum or something like that. The old Candy machine also advertised this feature, and considering that it was an electro-mechanical affair without sophisticated electronics, it must be a relatively simple trick.

Pixel sampling locations for the 7-segment display OCR.

In conclusion, a few words on how I did the display OCR. I started off with the seven-segment-ocr project by Suyash Kumar that I found on GitHub. It uses a very clever trick to do the OCR: it just looks at the intensity profiles of two horizontal and one vertical line per character per frame. Unfortunately, because my video had low resolution and was recorded at a slight angle, no amount of tweaking made it work for me. In the end I just hacked up a script that sampled 24 hand-picked single pixels from the video, compared them to a hand-picked threshold and decoded the display values from there. Still, Kumar's project came in useful, since I could reuse all of his OpenCV boilerplate.
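A condensed sketch of that pixel-sampling approach is below. The sampling coordinates, the threshold and the file name are hypothetical; the real ones were hand-picked for my particular video:

import cv2

# which segments (a, b, c, d, e, f, g) are lit for each digit
DIGITS = {
	(1, 1, 1, 1, 1, 1, 0): 0, (0, 1, 1, 0, 0, 0, 0): 1,
	(1, 1, 0, 1, 1, 0, 1): 2, (1, 1, 1, 1, 0, 0, 1): 3,
	(0, 1, 1, 0, 0, 1, 1): 4, (1, 0, 1, 1, 0, 1, 1): 5,
	(1, 0, 1, 1, 1, 1, 1): 6, (1, 1, 1, 0, 0, 0, 0): 7,
	(1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

# one hand-picked (x, y) sampling point per segment, per digit
POINTS = [
	[(40, 100), (60, 110), (60, 140), (40, 150), (20, 140), (20, 110), (40, 125)],
	[(90, 100), (110, 110), (110, 140), (90, 150), (70, 140), (70, 110), (90, 125)],
	[(140, 100), (160, 110), (160, 140), (140, 150), (120, 140), (120, 110), (140, 125)],
]
THRESHOLD = 128

def read_display(frame):
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	digits = []
	for points in POINTS:
		pattern = tuple(int(gray[y, x] > THRESHOLD) for x, y in points)
		# None when unreadable, e.g. during the "Kg Mode" animation
		digits.append(DIGITS.get(pattern))
	return digits

cap = cv2.VideoCapture("hoover.mp4")
while True:
	ok, frame = cap.read()
	if not ok:
		break
	# frame timestamp in seconds, taken from the video container
	t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
	print(t, read_display(frame))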

Recording of a digital clock synchronized to the DCF77 signal.

Since I like to be sure, I also double-checked that my method of getting real time from video timestamps is accurate enough. Using the same setup I recorded a DCF77-synchronized digital clock for two hours. I estimated the relative error between video timestamps and the clock to be -0.15%, which means that after two hours, my timings were at most 11 seconds off. The Microsoft-minute effects I found in Hoover are much larger than that.

Posted by Tomaž | Categories: Life | Comments »