Recent free software work

30.07.2016 9:25

I've done a bit of open source janitorial work recently. Here is a short recap.


jsonmerge is a Python module for merging a series of JSON documents using arbitrarily complicated rules inspired by JSON schema. I developed the gist of it with Sarah Bird during EuroPython 2014. Since then I've been maintaining it, but not doing any further development. The 1.2.1 release fixes a bug in the internal handling of JSON references, reported by chseeling. The bug caused a RefResolutionError to be raised when merging properties containing slash or tilde characters.

I believe jsonmerge is usable in its current form. The only larger problem I know of is that automatic schema generation for merged documents would need to be rethought and probably refactored. This would address the incompatibility with jsonschema 2.5.0 and improve the handling of some edge cases. get_schema() seems to be rarely used, however. I have no plans to work on this issue at the moment, since I'm not using jsonmerge myself, but I would be happy to look into pull requests or work on it myself if anyone offered a suitable bounty.
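
For readers who haven't used the module, here is a minimal sketch of typical use. The schema and document contents are made up for this post, but the Merger class with its merge() and get_schema() methods is the public interface:

from jsonmerge import Merger

# "append" concatenates arrays, "overwrite" simply takes the newer value.
schema = {
    "properties": {
        "log": {"mergeStrategy": "append"},
        "version": {"mergeStrategy": "overwrite"}
    }
}
merger = Merger(schema)

base = {"version": 1, "log": ["created"]}
head = {"version": 2, "log": ["updated"]}

result = merger.merge(base, head)
# result == {"version": 2, "log": ["created", "updated"]}

# get_schema() derives a schema that describes the merged document.
print(merger.get_schema())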


aspell-sl is the Slovenian dictionary for GNU Aspell. Its Debian package was recently orphaned. As far as I know, this is currently the only Slovenian dictionary included in Debian. I took over as the maintainer of the Debian package and fixed several long-standing packaging bugs to prevent it from disappearing from the next Debian stable release. I haven't updated the dictionary itself, however. The word list, while usable, remains as it was at its last update, sometime in 2002.

The situation with this dictionary seems complicated. The original word list appears to have been prepared in 2001 or 2002 by a diverse group of people from JSI, LUGOS, the University of Ljubljana and private companies. I'm guessing they were funded by the now-defunct Ministry of Information Society, which was financing localization of open source projects around that time. The upstream web page is long gone. In fact, aspell itself doesn't seem to be doing that well, although I'm still a regular user. The only free and up-to-date Slovenian dictionary I've found online is in the Slovenian Dictionary Pack for LibreOffice. It seems the word list from there could be adapted for GNU Aspell with relatively little work (Hunspell dictionaries use a very similar syntax). However, the upstream source of the data in the pack is unclear to me and I hesitate to mess too much with things I know very little about.


z80dasm is a disassembler for the Zilog Z80 microprocessor. I forked the dz80 project by Jan Panteltje when it became apparent that no freely available disassembler was capable of correctly disassembling Galaksija's ROM. The 1.1.4 release adds options for better control of labels generated at the start and end of sections in the binaries. It also fixes a memory corruption bug that could sometimes lead to a wrong disassembly.

Actually, I committed these two changes to the public git repository three years ago. Unfortunately, it seems I forgot to package them into a new release at the time. Now I also took the opportunity to update and clean up the autotools setup. I'll work towards updating the z80dasm Debian package as well. z80dasm is pretty much feature-complete at this point and, barring any further bug reports, I don't plan any further development.


Newsletters and other beasts

23.07.2016 10:23

It used to be that whenever someone wrote something particularly witty on Slashdot, it was followed by a reply along the lines of "I find your ideas intriguing and would like to subscribe to your newsletter". Ten-odd years later, it seems like the web took that old Simpsons joke seriously. These days you can hardly follow a link on Hacker News without having a pop-up thrown in your face. Most articles now end with a plea for an e-mail address, and I've even been to real-life talks where the speakers continuously advertised their newsletter to the audience.

Recently I've been asked several times why I didn't support subscriptions by e-mail, like every other normal website. The short answer is that I keep this blog in a state that I wish other websites I visit would adopt. This means no annoying advertisements, respecting your privacy by not loading third-party Javascript or tracking cookies, HTTPS and IPv6 support, valid XHTML... and good support for the Atom standard. Following the death of Google Reader, the world turned against RSS and Atom feeds. However, I still find them vastly more usable than any alternative. It annoys me that I can't follow interesting people and projects on modern sites like Medium and Hackaday.io through this channel.

Twitter printer at 32C3

That said, you now can subscribe by e-mail to my blog, should you wish to do so (see also sidebar top-right). The thing that finally convinced me to implement this was hearing that some of you use RSS-to-email services that add their own advertisements to my blog posts. I did not make this decision lightly though. I used to host mailing lists and know what an incredible time sink they can be, fighting spam, addressing complaints and so on. I don't have that kind of time anymore, so using an external mass-mailing service was the only option. Running my own mail server in this era is lunacy enough.

Mailchimp seems to be friendly, so I'm using that at the moment. If it turns to the dark side, or depending on how popular the newsletter gets, I might move to some other service - or remove the option altogether. For the time being I consider this an experiment. It's also worth mentioning that while there are no ads, Mailchimp does add mandatory tracking features (link redirects and tracking pixels). Of course, it also collects your e-mail address somewhere.

Since I'm on the topic of subscriptions, and since I don't like writing meta posts like this, I would also like to mention two ways of following my posts that are not particularly well known. If you are only interested in one particular topic I write about, you can search for it: the search results page has an attached Atom feed that only contains posts related to the query. If, on the other hand, you believe that Twitter is the new RSS, feel free to follow @aviansblog (at least until Twitter breaks the API again).


Visualizing frequency allocations in Slovenia

15.07.2016 17:34

If you go to a talk about dynamic spectrum access, cognitive radio or any other topic remotely connected with the way radio spectrum is used or regulated, chances are one of the slides in the introduction will contain the following chart. The multitude of little colorful boxes is supposed to impress on the audience that the spectrum is overcrowded with existing allocations and that any future technology will have problems finding vacant frequencies.

I admit I've used it myself in that capacity a couple of times. "The only space left unallocated is beyond the top-left and bottom-right edges, below 9 kHz and above 300 GHz", I would say, "and those frequencies are not very useful for new developments." After that I would feel free to advertise the latest crazy idea that would magically create more space, brushing aside the fact that spectrum seems to be like IPv4 address space - when there's a real need, the powers that be always seem to find more of it.

United States frequency allocations, October 2003.

Image by U.S. Department of Commerce

I was soon getting a bit annoyed by this chart. When you study it, you realize it's showing the wrong thing. Its crowdedness does fit the general story people are trying to tell, but the ITU categories it shows are not the problematic part of the 100-year legacy of radio spectrum regulations. I only came to realize that later, though. My first thought was: "Why are people discussing the future of spectrum in Europe using a ten-year-old chart from the U.S. Department of Commerce showing the situation on the other side of the Atlantic?"

Although there are a handful of similar charts for other countries on the web, I couldn't find one that I was happy with. So two years back, with a bit of free time and encouraged by the local Open Data group, I set out to make my own. It would show the right thing, be up to date and describe the situation in my home country. I downloaded public PDFs of the Slovenian Electronic Communications Act, the National Table of Frequency Allocations and a few assorted files with individual frequency licenses. Then I started writing a parser that would turn it all into machine-readable JSON. As you might imagine if you have ever encountered the words "PDF", "table" and "parsing" in the same sentence, I gave up after a few days of writing increasingly convoluted and frustrating Python code.

Tabela uporabe radijskih frekvenc (the national table of radio frequency allocations)

Fast forward two years and I was again going through some documents that discuss these matters, and I remembered this old abandoned project. In the meantime, new laws had been passed, the frequency allocation table had been updated several times and the PDFs were structured a bit differently. This meant I had to throw away all my previous work, but on the other hand the new documents looked a bit easier to parse. I took up the challenge again and this time managed to parse most of the basic NTFA into JSON after a day of work and about 350 lines of Python.

I won't dive deep into technicalities here. I started with the PDF converted to whitespace-formatted UTF-8 text using the pdftotext tool that comes with Poppler. Then I had a series of functions that successively turned the text into structured data. This made it easy to inspect and debug each step. Some of the steps were "fix typos in the document" (there are several, by the way, including inconsistent use of points and commas for decimal marks), "extract column widths", "extract header hierarchy", "normalize service names", and so on. If there is interest, I might present the details in a talk at one of the future Open Data Meetups in Ljubljana.
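
Just to illustrate the general shape of the approach, here is a heavily simplified sketch. The function names and the clean-up steps are made up for this post, not taken from the actual parser, but the pdftotext invocation and the idea of chaining small text-to-text transformations are the same:

import re
import subprocess

def pdf_to_text(path):
    # "-layout" makes pdftotext keep the table columns aligned with spaces,
    # "-" sends the result to stdout instead of a file.
    result = subprocess.run(["pdftotext", "-layout", path, "-"],
                            check=True, capture_output=True, text=True)
    return result.stdout

def fix_typos(text):
    # Hypothetical clean-up step: normalize decimal commas to points.
    return re.sub(r"(\d),(\d)", r"\1.\2", text)

def extract_rows(text):
    # Hypothetical step: drop empty lines; the real steps would detect
    # column widths, header hierarchy and so on.
    return [line for line in text.splitlines() if line.strip()]

def parse_ntfa(path):
    # Each step takes the output of the previous one, so any intermediate
    # result can be inspected in isolation.
    return extract_rows(fix_typos(pdf_to_text(path)))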

Once I had the data in JSON, drawing a visualization much like the U.S. one above took another 250 lines using matplotlib. Writing them was much more pleasant in comparison though. In hindsight, it would actually make sense to do the visualization part first, since it was much easier to spot parsing mistakes from the graphical representation than by looking at JSON.
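
For anyone wanting to try something similar, the core of such a plot fits in a few lines of matplotlib. This is only a rough sketch with a few invented bands, not my actual plotting code:

import matplotlib.pyplot as plt

# (start, stop, service) in MHz - invented example data, not the real table.
bands = [
    (87.5, 108.0, "BROADCASTING"),
    (108.0, 117.975, "AERONAUTICAL RADIONAVIGATION"),
    (2400.0, 2450.0, "FIXED, MOBILE, AMATEUR, RADIOLOCATION"),
]

# One color per distinct service combination.
services = sorted(set(service for _, _, service in bands))

fig, ax = plt.subplots(figsize=(10, 2))
for start, stop, service in bands:
    ax.broken_barh([(start, stop - start)], (0, 1),
                   facecolors=plt.cm.Paired(services.index(service)))

ax.set_xscale("log")
ax.set_xlabel("Frequency [MHz]")
ax.set_yticks([])
plt.show()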

Uporaba radijskih frekvenc v Sloveniji glede na storitev (use of radio frequencies in Slovenia, by service)

While it is both local and relatively up to date, the visualization as such isn't very good. It still only works for conveying the general idea of fragmentation, but not much else. There are way too many categories for an unaided eye to easily match the colors in the legend with the bands in the graph. It would be much better to have an interactive version where you could point to a band and the relevant information would pop up (or maybe even be pointed to the relevant paragraphs in the PDFs). Unfortunately this is beyond my meager knowledge of Javascript frameworks.

My chart also still just lists the ITU categories. Not only do they have very little to do with finding space for future allocations, they are also useless for spotting interesting parts of the spectrum. For example, the famous 2.4 GHz ISM band doesn't stand out in any way here - it's listed simply under the "FIXED, MOBILE, AMATEUR and RADIOLOCATION" services. All the interesting details regarding licensing and technologies in individual bands are hidden in various regulations, scattered across a vast number of tables, appendices and different PDF documents, often in textual form that currently seems impossible to extract in an automated way.

I'm still glad that I now have at least some of this data in computer-readable form. I'm sure it will come in handy in other projects. For instance, I might eventually use it to add some automatic labels to my real-time UHF and VHF spectrogram from the roof of the IJS campus.

I will not be publishing the JSON data and parsing code publicly at the moment. I have concerns about its correctness, and the code is so specialized for this specific document that I doubt anybody will find it useful for anything else. However, if you have a legitimate use for the data, please send me an e-mail and I will be happy to share my work.


Raspberry Pi Compute Module eMMC benchmarks

03.07.2016 13:52

I have a Raspberry Pi Compute Module development kit on my desk at the moment. I'm doing some testing and prototyping because we're considering using it for a project at the Institute. The Compute Module is basically a small PCB with Broadcom's BCM2835 system-on-chip, 4 GB of flash on an eMMC connection and little else. Even providing the power supply at a number of different voltages is left as an exercise for the user.

Raspberry Pi Compute Module

I was wondering how the eMMC flash performs compared to the SD card on the more common Pies. I couldn't find any good benchmarks on the web. Wikipedia says that the latest eMMC standard rivals SATA speeds, but there's not much information around on what kind the Compute Module uses. I used Samsung's ARM Chromebook with eMMC flash a while ago and it felt pretty fast. On the other hand, watching package updates scroll by on the Compute Module gave me a feeling that it's quite sluggish.

To get a more objective benchmark, I decided to compare the I/O performance with my Raspberry Pi Zero. The Zero uses the same BCM2835 SoC, so the results should be somewhat comparable. I used the SD card that originally came with the Zero, preloaded with the Noobs distribution. It only has the raspberry logo printed on it, so I don't know the exact model or manufacturer. Both the Compute Module and the Zero were running the latest Raspbian Jessie.

One surprising discovery during this benchmark was that the CPU on the Zero runs between 700 MHz and 1 GHz, while the Compute Module will only run at 700 MHz. These are the ranges detected at boot by bcm2835-cpufreq with the default /boot/config.txt that came with the Raspbian image (i.e. no special overclocking). Because of this I performed the benchmarks on the Zero at both 700 MHz and 1 GHz.
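
If you want to check what range cpufreq detected on your own board, the values are exposed through the standard Linux sysfs interface. A quick sketch (this reads the generic cpufreq files, nothing Raspberry Pi specific):

# Prints the minimum and maximum CPU frequency known to cpufreq, in MHz.
# The sysfs values are given in kHz.
base = "/sys/devices/system/cpu/cpu0/cpufreq/"
for name in ("cpuinfo_min_freq", "cpuinfo_max_freq"):
    with open(base + name) as f:
        print(name, int(f.read()) // 1000, "MHz")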

For comparison, I also ran the same benchmark on my Cubietruck that has an Allwinner A20 system-on-chip with SATA-connected Samsung EVO 840 SSD and runs vanilla Debian Jessie.

This is the benchmark script I used. For each run, I chose the fastest result out of 5:

#!/bin/sh

# Number of repetitions for each test.
N=5

# Device under test. The script needs to run as root for hdparm and for
# dropping the caches below.
DEVICE=/dev/sda
#DEVICE=/dev/mmcblk0

# Buffered sequential reads from the device, bypassing the page cache.
I=0
while [ $I -lt $N ]; do
	hdparm -t $DEVICE
	I=$(($I+1))
done

# Reads served from the kernel block device cache.
I=0
while [ $I -lt $N ]; do
	hdparm -T $DEVICE
	I=$(($I+1))
done

# Sequential write of a 128 MB file, flushed to the device with fdatasync.
I=0
while [ $I -lt $N ]; do
	dd if=/dev/zero of=tempfile bs=1M count=128 conv=fdatasync 2>&1
	I=$(($I+1))
done

# Sequential read of the same file, with the kernel caches dropped before
# each run.
I=0
while [ $I -lt $N ]; do
	echo 3 > /proc/sys/vm/drop_caches
	dd if=tempfile of=/dev/null bs=1M count=128 2>&1
	I=$(($I+1))
done

# Sequential read of the same file again, this time served from the cache.
I=0
while [ $I -lt $N ]; do
	dd if=tempfile of=/dev/null bs=1M count=128 2>&1
	I=$(($I+1))
done

Here is the write performance, as measured by dd. I wonder if the dd figures are affected by filesystem fragmentation, since it writes an actual file that might not be contiguous. I've been using the Zero for a while with this Raspbian image, while the Compute Module has been freshly re-imaged. Fragmentation shouldn't be as significant as with spinning disks, but it probably still has some effect.

Comparison of write performance.

Read performance, as measured by hdparm as well as dd. To remove the effect of cache when measuring with dd, I explicitly dropped kernel block device caches before each run.

Comparison of read performance.

From this it seems the Compute Module's eMMC flash is slightly faster than the SD card, on both reads and writes, when compared to the Zero running at the same CPU clock frequency. It's interesting that the Zero's results change significantly with CPU frequency, which seems to suggest that some part of the SD card I/O is CPU bound. That said, the performance is roughly on the same order of magnitude. The Cubietruck is significantly faster than both. In light of this result, it's sad that newer versions of the Cubieboard (and cheap ARM SoCs in general) dropped the SATA interface.

Finally, I tested block device cache performance. This more or less shows only RAM and CPU performance and shouldn't depend on storage speed.

Comparison of cached read performance.

Interestingly, the Zero seems to be somewhat faster than the Compute Module at 700 MHz here. /proc/cpuinfo shows a different revision, although it's not clear to me whether that marks the board revision or the SoC revision. It might be that the processors in the Zero and the Compute Module are not identical pieces of silicon.

In the end, I should note that these results are not super accurate. Complexities of I/O benchmarking on Linux aside, there are several things that might have affected the results. I already mentioned different filesystem state. A different SD card in Zero might give very different results (I didn't have a second empty card at hand to try that). While Raspberry Pies were idle during these tests, Cubietruck was running my web server and various other little tidbits that tend to accumulate on such machines.
