## Faking adjustment layers with GIMP layer modes

12.03.2018 19:19

Drawing programs usually use a concept of layers. Each layer is like a glass plate you can draw on. The artwork appears in the program's window as if you were looking down through the stack of plates, with ink on upper layers obscuring that on lower layers.

Adobe Photoshop extends this idea with adjustment layers. These do not add any content, but instead apply a color correction filter to the layers below. Adjustment layers work in real time: the color correction is applied seamlessly as you draw on the layers below.

Support for adjustment layers in GIMP is a common question. GIMP does have a Color Levels tool for color correction, but it can only be applied to one layer at a time. The Color Levels operation is also destructive: if you apply it to a layer and want to continue drawing on it, you have to adjust the colors of your brushes as well to stay consistent, which is often hard to do. In practice this means leaving the color correction for the very end, when no further changes will be made to the drawing layers and the correction can be applied to a flattened version of the image.

Adjustment layers are currently on the roadmap for GIMP 3.2. However, considering that Debian still ships with GIMP 2.8, that seems a long way off. Is it possible to have something like that today? I found some tips on the web on how to fake adjustment layers using various layer modes, but these are very hand-wavy. Ideally I would like to perfectly replicate the Color Levels dialog, not follow some vague instructions on how to make the picture a bit lighter. Those tips did give me an idea though.

Layer modes allow you to perform some simple mathematical operations between pixel values on layers, like addition, multiplication and a handful of others. If you fill a layer with a constant color and set it to one of these layer modes, you are in fact transforming pixel values from layers below using a simple function with one constant parameter (the color on the layer). Color Levels operation similarly transforms pixel values by applying a function. So I wondered, would it be possible to combine constant color layers and layer modes in such a way as to approximate arbitrary settings in the color levels dialog?

If we look at the Color Levels dialog, there are three important settings: black point (b), white point (w) and gamma (g). These can be adjusted for the red, green and blue channels individually, but since the operations are identical and independent, it suffices to focus on a single channel. Also note that GIMP performs all operations in the range 0 to 255; however, the calculations work out as if it were operating in the range 0 to 1 (effectively, fixed-point arithmetic with a scaling factor of 255 is used). Since it's somewhat simpler to demonstrate, I'll use the range 0 to 1.

Let's first ignore gamma (leave it at 1) and look at b and w. Mathematically, the function applied by the Color Levels operation is:

y = \frac{x - b}{w - b}

where x is the input pixel value and y is the output. On a graph it looks like this (you can also get this graph from GIMP with the Edit these Settings as Curves button):

Here, the input pixel values are on the horizontal axis and output pixel values are on the vertical. This function can be trivially split into two nested functions:

y = f_2(f_1(x))
where
f_1(x) = x - b
f_2(x) = \frac{x}{w-b}

f1 shifts the origin by b and f2 increases the slope. This can be replicated using two layers at the top of the layer stack. GIMP documentation calls these masking layers:

• Layer mode Subtract, filled with pixel value b,
• Layer mode Divide, filled with pixel value w - b.

Note that values outside of the range 0 - 1 are clipped. This happens on each layer, which makes things a bit tricky for more complicated layer stacks.
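With per-layer clipping taken into account, the two-layer Subtract/Divide stack really does reproduce the Color Levels formula. Here is a small NumPy sketch of the comparison (my own illustration, not the plug-in's code):

```python
import numpy as np

def levels(x, b, w):
    """Reference Color Levels operation with gamma = 1."""
    return np.clip((x - b) / (w - b), 0.0, 1.0)

def fake_levels(x, b, w):
    """The same correction built from two masking layers. Values are
    clipped to [0, 1] after each layer, just as GIMP clips the result
    of every layer mode operation."""
    x = np.clip(x - b, 0.0, 1.0)        # Subtract layer filled with b
    x = np.clip(x / (w - b), 0.0, 1.0)  # Divide layer filled with w - b
    return x

x = np.linspace(0.0, 1.0, 11)
assert np.allclose(levels(x, 0.2, 0.8), fake_levels(x, 0.2, 0.8))
```

The clipping happens to be harmless here because the Subtract layer only clips values that the Divide layer would have clipped anyway.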

The above took care of the black and white point settings. What about gamma adjustment? This is a non-linear operation that gives more emphasis to darker or lighter colors. Mathematically, it's a power function with a real exponent. It is applied on top of the previous linear scaling.

y = x^g

GIMP allows values of g between 0.1 and 10 in the Color Levels tool. Unfortunately, no layer mode includes a power function. However, the Soft light mode applies the following equations:

R = 1 - (1 - M)\cdot(1-x)
y = ((1-x)\cdot M + R)\cdot x

Here, M is the pixel value of the masking layer. If M is 0, this simplifies to:

y = x^2
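This simplification is easy to verify numerically, for example with a few lines of NumPy (my own sketch, not code from GIMP):

```python
import numpy as np

def soft_light(x, m):
    """Soft light layer mode, as given by the two equations above;
    m is the pixel value on the masking layer."""
    r = 1.0 - (1.0 - m) * (1.0 - x)
    return ((1.0 - x) * m + r) * x

x = np.linspace(0.0, 1.0, 101)
assert np.allclose(soft_light(x, 0.0), x ** 2)  # M = 0 reduces to x squared
```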

So by stacking multiple such masking layers in Soft light mode, we can get any exponent that is a power of 2. This is still not really useful though; we want to be able to approximate any real gamma value. Luckily, the layer opacity setting opens up some more options. Layer opacity p (again in the range 0 to 1) basically makes a linear combination of the original pixel value and the masked value. Taking this into account we get:

y = (1-p)x + px^2

By stacking multiple masking layers with opacity, we can get a polynomial function:

y = a_1 x + a_2 x^2 + a_3 x^3 + \dots

By carefully choosing the opacities of the masking layers, we can manipulate the polynomial coefficients a_n. Polynomials of a sufficient degree can be a very good approximation of a power function with g > 1. For example, here is an approximation using the above method for g = 3.33, using 4 masking layers:
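To give an idea of how such opacities might be found, here is a sketch using SciPy's least-squares fit. This is only an illustration of the approach, not the plug-in's actual fitting code:

```python
import numpy as np
from scipy.optimize import least_squares

def stack(x, opacities):
    """Apply several Soft light masking layers (mask value 0) in sequence.
    A single layer at opacity p maps x to (1 - p)*x + p*x**2."""
    for p in opacities:
        x = (1.0 - p) * x + p * x * x
    return x

g = 3.33                        # target gamma; this method needs g > 1
xs = np.linspace(0.0, 1.0, 256)
fit = least_squares(lambda p: stack(xs, p) - xs ** g,
                    x0=[0.5] * 4, bounds=(0.0, 1.0))

err = np.max(np.abs(stack(xs, fit.x) - xs ** g))
print("opacities:", fit.x, "max abs error:", err)
```

Four layers give a composed polynomial of degree 16, which leaves the optimizer plenty of freedom to hug the power curve.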

What about the g < 1 case? Unfortunately, polynomials don't give us a good approximation there, and no layer mode involves square roots or any other usable function like that. However, we can apply the same principle of linear combination via the opacity setting to multiple saturated Divide operations. This makes it possible to approximate the power function piecewise linearly. It's not as good as the polynomial, but with enough linear segments it can get very close. Here is one such approximation, for g = 0.33, again using 4 masking layers:
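The same trick can be sketched in NumPy. Each saturated Divide layer applied at some opacity adds one knee to the transfer curve. The dividers and opacities below are hand-picked for illustration only; a real implementation would fit them against x**g numerically:

```python
import numpy as np

def divide_layer(x, d, p):
    """One Divide masking layer filled with value d, applied at opacity p.
    The division saturates at 1, so each layer adds one knee to the curve."""
    return (1.0 - p) * x + p * np.clip(x / d, 0.0, 1.0)

# Hypothetical, hand-picked dividers and opacities for illustration only.
layers = [(0.05, 0.4), (0.2, 0.4), (0.5, 0.4), (0.8, 0.3)]

x = np.linspace(0.0, 1.0, 256)
y = x.copy()
for d, p in layers:
    y = divide_layer(y, d, p)

# The result is a monotonic piecewise linear curve that, like x**g for
# g < 1, lies above the identity and maps 0 to 0 and 1 to 1.
assert np.all(np.diff(y) >= 0) and np.all(y >= x - 1e-9)
```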

To test all this together in practice I've made a proof-of-concept GIMP plug-in that implements this idea for gray-scale images. You can get it on GitHub. Note that it needs relatively recent SciPy and NumPy versions, so if it doesn't work at first, try upgrading them from PyPI. This is what the adjustment layers look like in the Layers window:

Visually, the results are reasonably close to what you get through the ordinary Color Levels tool, although not perfectly identical. I believe some of the discrepancy is caused by rounding errors. The following comparison shows a black-to-white linear gradient with a gamma correction applied. The first stripe is the original gradient, the second has the gamma correction applied using the built-in Color Levels tool, and the third has the same correction applied using the fake adjustment layers created by my plug-in:

How useful is this in practice? It certainly has the non-destructive, real-time property of Photoshop's adjustment layers: the adjustment is immediately visible on any change in the drawing layers. Changing the adjustment settings, like the gamma value, is somewhat awkward, of course. You need to delete the layers created by the plug-in and create a new set (although an improved plug-in could assist with that). There is no live preview like in the Color Levels dialog, but you can use Color Levels for the preview and then copy-paste the values into the plug-in. The multiple layers also clutter the layer list. Unfortunately they can't be put into a layer group, since the modes of layers inside a group only apply within that group, not to the layers below it.

My current code only works on gray-scale images. The polynomial approximation uses the layer opacity setting, and layer opacity can't be set separately for the red, green and blue channels, so this way of applying gamma adjustment can't be adapted for color. However, color support should be possible with the piecewise linear method, since there the divider can be controlled separately for each channel (it's defined by the color fill on the layer). The opacity still stays the same for all channels, but I think it should be possible to make it work. I haven't done the math for that though.

Posted by | Categories: Life | Comments »

## OpenCT on Debian Stretch

24.02.2018 10:03

I don't like replacing old technology that isn't broken, although I sometimes wonder whether that's just rationalizing a fear of change. I'm still using a bunch of old Schlumberger Cryptoflex (e-Gate) USB tokens for securely storing client-side SSL certificates. All of them are held together by black electrical tape at this point, since the plastic has become brittle with age. However, they still serve their purpose reasonably well, even if software support for them was obsoleted a long time ago. So what follows is another installment in the series on keeping these hardware tokens working on the latest Debian Stable release.

Stretch upgrades the pcscd and libpcsclite1 packages (from the pcsc-lite project) to version 1.8.20. Unfortunately, this upgrade breaks the old openct driver, which is to my knowledge the only way to use these tokens on a modern system. This manifests itself as the following error when dumping the list of currently connected smart cards:

$ pkcs15-tool -D
Using reader with a card: Axalto/Schlumberger/Gemalo egate token 00 00
PKCS#15 binding failed: Unsupported card

Some trial and error and git bisect led me to commit 8eb9ea1, which apparently caused this issue. It was committed between releases 1.8.13 (which was shipped in Jessie) and 1.8.14. This commit introduces some subtle changes in the way buffers of data are exchanged between pcscd and its drivers, which break openct 0.6.20. There are two ways around that: you can keep using pcscd and libpcsclite1 from Jessie (the 1.8.13 source package from Jessie builds fine on Stretch), or you can patch openct. I've decided on the second option.

The openct driver is no longer developed upstream and has been removed from Debian in Jessie (the last official release was in 2010, although there has been some effort to modernize it). I keep my own git repository and Debian packages based on the last package shipped in Wheezy. My patched version 0.6.20 includes changes required for systemd support, and now also the patch required to support the modern pcscd version on Stretch. The latter was helpfully pointed out to me by Ludovic Rousseau on the pcsc-lite mailing list.

My openct packages for Stretch on amd64 can be found here (version 0.6.20-1.2tomaz2). The updated source is also in a git repository (with a layout compatible with git-buildpackage), should you want to build it yourself:

$ git clone http://www.tablix.org/~avian/git/openct.git


Other smart card-related packages work for me as shipped in Stretch (e.g. opensc and opensc-pkcs11 0.16.0-3). No changes were necessary in the Firefox configuration for it to be able to pull client-side certificates from the hardware tokens. It is still required, however, to insert the token only when no instances of Firefox are running.

Posted by | Categories: Code | Comments »

## Notes on the general-purpose clock on BCM2835

17.02.2018 20:30

Raspberry Pi boards, or more specifically the Broadcom system-on-chips they are based upon, have the capability to generate a wide range of stable clock signals entirely in hardware. These are called general-purpose clock peripherals (GPCLK) and the clock signals they generate can be routed to some of the GPIO pins as alternate pin functions. I was recently interested in this particular detail of Raspberry Pi and noticed that there is little publicly accessible information about this functionality. I had to distill a coherent picture from various, sometimes conflicting, sources of information floating around various websites and forums. So, in the hope that it will be useful for someone else, I'm posting my notes on Raspberry Pi GPCLKs with links for anyone that needs to dig deeper. My research was focused on Raspberry Pi Compute Module 3, but it should mostly apply to all Raspberry Pi boards.

Here is an example of a clock setup that uses two GPCLK peripherals to produce two clocks that drive components external to the BCM2835 system-on-chip (in my case a 12.288 MHz clock for an external audio interface and a 24 MHz clock for a USB hub). This diagram is often called the clock tree and is a common sight in datasheets. Unfortunately, the publicly-accessible BCM2835 datasheet omits it, so I drew my own based on what information I could gather on the web. Only the components relevant to clocking the GPCLKs are shown:

The root of the tree is an oscillator on the far left of the diagram. BCM2835 derives all other internal clocks from it by multiplying or dividing its frequency. On a Compute Module 3 the oscillator clock is a 19.2 MHz signal defined by the on-board crystal resonator. The oscillator frequency is fixed and cannot be changed in software.

The oscillator is routed to a number of phase-locked loop (PLL) devices. A PLL is a complex device that allows you to multiply the frequency of a clock by a configurable, rational factor. Practical PLLs necessarily add some amount of jitter into the clock. How much depends on their internal design and is largely independent of the multiplication factor. For BCM2835 some figures can be found in the Compute Module datasheet, under the section Electrical specification. You can see that routing the clock through a PLL increases the jitter by 28 ps.

Image by Raspberry Pi (Trading) Ltd.

The BCM2835 contains 5 independent PLLs: PLLA, PLLB, PLLC, PLLD and PLLH. The system uses most of these for its own purposes, such as clocking the ARM CPU, the VideoCore GPU, the HDMI interface, etc. The PLLs have some default configuration that, however, cannot be strongly relied upon. Some default frequencies are listed on this page. Note that it says the PLLC settings depend on overclocking settings. My own experiments show that the PLLH settings change between firmware versions and depending on whether a monitor is attached to HDMI. On some Raspberry Pi boards, other PLLs are used to clock on-board peripherals like Ethernet or Wi-Fi (search for gp_clk in dt-blob.dts). On the Compute Module, PLLA appears to be turned off by default and free for general-purpose use. This kernel commit suggests that the PLLD settings are also stable.

For each PLL, the multiplication factors can be set independently in software using registers in I/O memory. To my knowledge these registers are not publicly documented, but the clk-bcm2835.c file in the Linux kernel offers some insight into what settings are available. The output of each PLL branches off into several channels. Each channel has a small integer divider that can be used to lower the frequency. It is best to leave the settings of the PLL and channel dividers to the firmware by using the vco@PLLA and chan@APER sections in dt-blob.bin. This is described in the Raspberry Pi documentation.

There are three available GPCLK peripherals: GPCLK0, GPCLK1 and GPCLK2. For each you can independently choose a clock source. 5 sources are available: the oscillator clock and 4 channels from 4 of the PLLs (PLLB isn't selectable). Furthermore, each GPCLK peripheral has an independent fractional divider. This divider can again divide the frequency of the selected clock source by (almost) an arbitrary rational number.

Things are somewhat better documented at this stage. The GPCLK clock source and fractional divider are controlled from I/O memory registers that are described in the BCM2835 ARM peripherals document. Note that there is an error in the equation for the average output frequency in Table 6-32. It should be:

f_{GPCLK} = \frac{f_{source}}{\mathrm{DIVI} + \frac{\mathrm{DIVF}}{4096}}
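Plugging numbers into this equation is straightforward. For example, with the 19.2 MHz oscillator multiplied to 1.92 GHz by PLLA and divided by 4 in the APER channel, the 480 MHz source frequency works out to clean register values for both of my target clocks. A quick sketch of the arithmetic (my own illustration, not firmware code):

```python
def gpclk_divider(f_source, f_target):
    """Split the ideal division ratio into the integer (DIVI) and
    fractional (DIVF, in units of 1/4096) register fields."""
    ratio = f_source / f_target
    divi = int(ratio)
    divf = round((ratio - divi) * 4096)
    return divi, divf

# 19.2 MHz oscillator, multiplied to 1.92 GHz by PLLA and divided
# by 4 in the APER channel, gives a 480 MHz clock source.
f_source = 480e6

print(gpclk_divider(f_source, 24e6))      # (20, 0): a clean integer ratio
print(gpclk_divider(f_source, 12.288e6))  # (39, 256): 480 / 39.0625 = 12.288
```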

It is perfectly possible to set up a GPCLK by writing directly into the registers from Linux user space, for example by mmaping /dev/mem or using a command-line tool like busybox devmem. This way you can hand-tune the integer and fractional parts of the dividers or use the noise-shaping functions. You might want to do this if jitter is important. When experimenting, I find the easiest way to get the register base address is to search the kernel log. In the following case, the CM_GP0CTL register would be at 0x3f201070:

# dmesg|grep gpiomem
gpiomem-bcm2835 3f200000.gpiomem: Initialised: Registers at 0x3f200000


A simpler way of taking care of the GPCLK settings is again through the dt-blob.bin file. By using the clock@GPCLK0 directives under clock_routing and clock_setup, the necessary register values are calculated and set automatically at boot by the firmware. As far as I can see, these only allow using PLLA and APER. Attempting to set or use other PLL sources has unpredictable results in my experience. I also recommend checking the actual clock output with an oscilloscope, since these automatic settings might not be optimal.

The settings shown on the clock tree diagram on the top were obtained with the following part compiled into the dt-blob.bin:

clock_routing {
vco@PLLA {
freq = <1920000000>;
};
chan@APER {
div = <4>;
};
clock@GPCLK0 {
pll = "PLLA";
chan = "APER";
};
clock@GPCLK2 {
pll = "PLLA";
chan = "APER";
};
};

clock_setup {
clock@GPCLK0 {
freq = <24000000>;
};
clock@GPCLK2 {
freq = <12288000>;
};
};


If the clocks can be set up automatically, why is all this background important? The rational dividers used by the GPCLKs work by switching the output between two frequencies. In contrast to the PLLs, the jitter they introduce depends largely on their settings and can be quite severe in some cases. For example, this is what the 12.288 MHz clock looked like when set by the firmware from a first-attempt dt-blob.bin:

It's best to keep your clocks divided by integer ratios, because in that case the dividers introduce minimal jitter. If you can't do that, jitter is minimized by maximizing the input clock source frequency. In my case, I wanted to generate two clocks whose ratio wasn't an integer, so I was forced to use the fractional part on at least one divider. I opted for a low-jitter, integer-divided 24 MHz USB clock and a higher-jitter (but still well within the acceptable range) 12.288 MHz audio clock.
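The size of this jitter is easy to estimate: the fractional divider approximates the average ratio by alternating between dividing by DIVI and by DIVI + 1, so the output period alternates by exactly one period of the source clock. A back-of-the-envelope sketch, assuming a 480 MHz source and the DIVI = 39, DIVF = 256 settings that correspond to 12.288 MHz:

```python
f_src = 480e6          # clock source after PLLA and the APER channel divider
divi, divf = 39, 256   # fractional divider settings for 12.288 MHz

# The divider approximates the average ratio DIVI + DIVF/4096 by switching
# between dividing by DIVI and by DIVI + 1, so the output period alternates
# between these two values:
t_short = divi / f_src
t_long = (divi + 1) / f_src

# Peak-to-peak period jitter is one period of the source clock, which is
# why a higher source frequency means less jitter.
jitter = t_long - t_short
print(jitter * 1e9, "ns")  # about 2.08 ns
```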

This is what the 12.288 MHz clock looks like with the settings shown above. It's a significant improvement over the first attempt:

GPCLKs can be useful in lowering the cost of a system, since they can remove the need for separate, expensive crystal resonators. However, it can sometimes be deceiving how easy they are to set up on a Raspberry Pi. dt-blob.bin offers a convenient way of enabling clocks on boot without any assistance from Linux userland. Unfortunately, the exact mechanism by which the firmware sets the registers is not public, hence it's worth double-checking its work by understanding what is happening behind the scenes, inspecting the registers and checking the actual clock signal with an oscilloscope.

Posted by | Categories: Digital | Comments »

## FOSDEM 2018

08.02.2018 11:28

Gašper and I attended FOSDEM last weekend. It is an event about open source and free software development that happens each year in Brussels. I've known about it for some time, mostly because it was quite a popular destination among a group of Kiberpipa alumni a few years ago. I've never managed to join them. This year though I was determined to go. Partly because I couldn't get a ticket for 34C3 and was starting to miss this kind of a crowd, and partly because several topics on the schedule were relevant to one of my current projects.

FOSDEM turned out to be quite unlike any other event I've attended. I was surprised to hear that there are no tickets or registration. When we arrived at the event it was immediately apparent why that is the case. Basically, for two days the Université libre de Bruxelles simply surrenders their lecture rooms and hallways to software developers. Each room is dedicated to talks on a particular topic, such as software defined radio or Internet of things. Hallways are filled with booths, from companies offering jobs to volunteer projects gathering donations and selling t-shirts. I counted more than 50 tracks, distributed over 5 campus buildings. More than once I found myself lost in the unfamiliar passageways.

Opinions on whether the event is too crowded seem to differ. I often found the halls unbearably packed and had to step out to get a breath of air, but I do have a low threshold for this kind of thing. The mostly cold, wet and windy weather didn't help with the crowds indoors either. The lecture rooms themselves had volunteers taking care that they were not filled over capacity. This meant that once you got into a room, following the talks was a pleasant experience. However, most rooms had long queues in front. The only way to get in was to show up early in the morning and stay there. I didn't manage to get back into any smaller room after leaving for lunch, so I usually ended up in the biggest auditorium with the keynote talks.

Speaking of keynotes: I enjoyed Simon Phipps's and Italo Vignoli's talk about the past 20 years of open source. They gave a thorough overview of the topic with some predictions for the future. I found myself wondering whether the open source movement really was such a phenomenal success. It is indeed everywhere behind the pervasive web services of today; however, the freedom for users to run, study and improve software at their convenience is mostly missing these days, when software is hidden behind an opaque web server on someone else's computer. Liam Proven in Alternate histories of computing explored how every generation reinvents the solutions and mistakes of its predecessors, with a focus on Lisp machines. It seems that one defining property of the computer industry is that implementing a thing from scratch is invariably favored over understanding and building upon existing work. I also recommend Steven Goodwin's talk about how hard it is to get behavior right in the simplest things, like smart light switches. He gave nice examples of why a smart appliance that has not been well thought out will cause more grief than a dumb old one.

From the more technical talks I don't have many recommendations. I haven't managed to get into most of the rooms I had planned for. From hanging around the Internet of things discussions one general sentiment I captured was that MQTT has become a de facto standard for any Internet-connected sensor or actuator. ESP8266 remains as popular as ever for home-brew projects. Moritz Fischer gave a fascinating, but very dense intro into a multitasking scheduler for ARM Cortex microcontrollers that was apparently custom developed by Google for use on their Chromebook laptops. Even though I'm fairly familiar with ARM, he lost me after the first 10 minutes. However if I ever need to look into multitasking on ARM microcontrollers, his slides seem to be a very good reference.

I don't regret going to FOSDEM. It has been an interesting experience and the trip was worth it just for the several ideas Gašper and I got there. I can't complain about the quality of the talks I've seen, and although the whole thing seemed chaotic at times, it must have been a gargantuan effort on the part of the volunteer team to organize. I will most likely not be going again next year though. I feel like this is more of an event for getting together with a team you've been closely involved with in developing a large open source project. Apart from drive-by patches, I haven't collaborated in such projects for years now, so I often felt like an outsider who was adding to the crowds more than contributing to the discussion.

Posted by | Categories: Life | Comments »

## Tracking notebooks

27.01.2018 20:33

Two months ago, there was a post on Hacker News about keeping a lab notebook. If you like to keep written notes, the training slides from NIH might be interesting to flip through, even if you're not working in a scientific laboratory. In the ensuing discussion, user cgore recommended attaching a Bluetooth tracker to notebooks. In recent years I've developed quite a heavy note-taking habit and these notebooks have become increasingly important to me. I carry one almost everywhere I go. I can be absentminded at times and have had my share of nightmares about losing the last 80 pages of notes. So the idea of a Bluetooth tracker giving me peace of mind seemed worth exploring further.

cgore mentioned the Tile tracker in their comment. The Tile Slim variant seemed to be most convenient for sticking to a cover of a notebook. After some research I found that the Slovenian company Chipolo was developing a similar credit card-sized device, but in November last year I couldn't find a way to buy one. The Wirecutter's review of trackers was quite useful. In the end, I got one Tile Slim for my notebook.

This is the tracker itself. The logo on the top side is a button and there is a piezo beeper on the other side. It is indeed surprisingly thin (about 2.5 mm). Of course if you cover it with a piece of paper (or ten), the bump it makes under the pen is still hard to ignore when writing over it. I mostly use school notebooks with soft covers, so the first challenge was where to put it on the notebook for it to be minimally annoying. I also wanted it to be removable so I could reuse the Tile on a new notebook.

What I found to work best for me is to put the Tile in a paper pocket. I glue the pocket to the back cover of the notebook, near the spine. I still feel the edges when writing, but it's not too terrible. So far I've been making the pockets from ordinary office paper (printable pattern) and they seem to hold up fine for the time the notebook is in active use. They do show noticeable wear though, so using heavier paper (or duct tape) might not be a bad idea. The covers of the notebooks I use are thick enough that I haven't noticed pressing the button on the Tile with my pen.

There is no simple way to disassemble the Tile and the battery is neither replaceable nor rechargeable. Supposedly it lasts a year and after that you can get a new Tile at a discount. Unless, that is, you live in a country where they don't offer that. Do-it-yourself battery replacement doesn't look like an option either, although I will probably take mine apart when it runs out. I don't particularly like the idea of encouraging more throw-away electronics.

I found the Android app that comes with the tracker quite confusing at first. Half of the screen space is dedicated to "Tips" and there is some kind of a walk-through mode when you first use it. It was not clear to me what was part of the walk-through and what was for real. I ended up with the app in a state where it insisted that my Tile was lost and stubbornly tried to push their crowd-sourced search feature. Some web browsing later I found that I needed to restart my phone, and after that I didn't notice the app erroneously losing contact with the Tile again. In any case, hopefully I won't be opening the app too often.

Once I figured them out, the tracker and the app did seem to do their job reasonably well. For example, if I left the notebook at the office and went home, the app would tell me the street address of the office where I left it. I simulated a lost notebook several times and it always correctly identified the last-seen address, which seems encouraging. I'm really only interested in this kind of rough tracking. I'm guessing the phone records the GPS location when it no longer hears the Bluetooth beacons from the tracker. The Tile also has a bunch of features I don't care about: you can make your phone ring by pressing the button on the Tile, or you can make the Tile play a melody. There's also an RSSI-ranging kind of thing in the app that will tell you how far from the Tile you are in the room, but it's as useless as all other such attempts I've seen.

With apologies to xkcd

The battery drain on the phone after setting up the app is quite noticeable. I haven't done any thorough tests, but my Motorola Moto G5 Plus went from around 80% charge at the end of the day to around 50%. While before I could usually go two days without a charger, now the battery will be in the red after 24 hours. I'm not sure whether that is due to the app itself or the fact that Bluetooth on the phone must now be turned on all the time. Android's battery usage screen lists neither the Tile app nor Bluetooth (nor anything else for that matter) as causing significant battery drain.

Last but not least, carrying the Tile on you is quite bad for privacy. The Tile website simply brushes such concerns aside, saying we are all being constantly tracked anyway. But the fact is that using this makes (at least) one more entity aware of your every move. The app does call home with your location, since you can look up the current Tile location on the web (although for me it says I don't own any Tiles, so maybe the call-home feature doesn't actually work). It's another example of an application that would work perfectly fine without any access to the Internet, but for some reason needs to talk to the Cloud. The Tiles are also trivially trackable by anyone with a Bluetooth dongle, and I wonder what kind of authentication is necessary for triggering that battery-draining play-the-melody thing.

I will probably keep using this Tile until it runs out. It does seem to be useful, although I have not yet lost a notebook. I'm not certain I will get another though. The throw-away nature of it is off-putting and I don't like being a walking Bluetooth beacon. However it does make me wonder how much thinner you could make one without that useless button and beeper, and perhaps adding a wireless charger. And also if some kind of a rolling code could be used to prevent non-authorized people from tracking your stuff.

Posted by | Categories: Life | Comments »

## News from the Z80 land

05.01.2018 19:30

Here are some overdue news regarding modern development tools for the vintage Z80 architecture. I've been wanting to give this a bit of exposure, but alas other things in life interfered. I'm happy that there is still this much interest in old computers and software archaeology, even if I wasn't able to dedicate much of my time to it in the last months.

Back in 2008, Stefano Bodrato added support for Galaksija to Z88DK, a software development toolkit for Z80-based computers. Z88DK consists of a C compiler and a standard C library (with functions like printf and scanf) that interfaces with Galaksija's built-in terminal emulation routines. His work was based on my annotated ROM disassembly and development tools. The Z88DK distribution also includes a few cross-platform examples that can be compiled for Galaksija, provided they fit into the limited amount of RAM. When compiling for Galaksija, the C compiler produces a WAV audio file that can be directly loaded over an audio connection, emulating an audio cassette recording.

This November, Stefano improved support for the Galaksija compile target. The target now more fully supports the graphics functions from the Z88DK library, including things like putsprite for Galaksija's 2x3 pseudographics. He also added joystick emulation to the Z88DK cross-platform joystick library. The library emulates two joysticks via the keyboard. The first one uses the arrow keys on Galaksija's keyboard. The second one uses the 5-6-7-8 keys in the top row, which should be familiar to users of the ZX Spectrum. He also added a clock function that uses the accurate clock ticks provided by the ROM's video interrupt.

(Click to watch Z88DK clock example on Galaksija video)

This made it possible to build more games and demos for Galaksija. I've tried the two-player snakes game and the TV clock demo and (after some tweaking) they both worked on my Galaksija replica. To try them yourself, follow the Z88DK installation instructions and then compile the demos under examples/graphics by adapting the zcc command-lines at the top of the source files.

For instance, to compile clock.c into clock.wav I used the following:

zcc +gal -create-app -llib3d -o clock clock.c

The second piece of Z80-related news I wanted to share relates to the z80dasm 1.1.5 release I made back in August. z80dasm is a disassembler for Z80 machine code. The latest release significantly improves the handling of undocumented instructions, a problem that was reported to me by Ast Moore. I've already written about that in more detail in a previous blog post.

Back in November I also updated the Debian packaging for z80dasm, so the new release is now available as a Debian source or binary package that should cleanly build and install on Jessie, Stretch and Sid. Unfortunately, the updated package has not yet been uploaded to the Debian archive. I have been unable to reach any of my previous sponsors (in the Debian project, a package upload must be sponsored by a Debian developer). If you're a Debian developer with an interest in vintage computing and would like to sponsor this upload, please contact me.

Update: the z80dasm 1.1.5-1 package is now in Debian Unstable.

Posted by | Categories: Code | Comments »

## Luggage lock troubles

05.10.2017 18:32

Back in August an annoying thing happened while I was traveling in Germany. My luggage lock wouldn't open. I was pretty sure I had the right combination, but the damn thing would not let me get to my things. I tried some other numbers that came to mind and made a few futile attempts at picking it. I then sat down and went through all 1000 combinations. It would still not open. At that point it was around 1 am, I had a long bus ride ahead of me that day and desperately needed a shower and sleep. I ran out of patience, took the first hard object that came to hand and broke off the lock. Thankfully, that only took a few moments.

This is the culprit, reassembled back into its original shape after the fact. It's a type of 3-digit combination lock that locks the two zipper handles on the main compartment of the suitcase. It came off an "Easy Trip" suitcase of Chinese origin.
It's obviously just a slight inconvenience for anyone wanting to get in, but I didn't want to ruin my suitcase, and dragging my clothes through a narrow crack made by separating the zipper with a ballpoint pen felt uncivilized.

This is what the inside of the lock looks like in the locked position. There are three discs with notches, one for each digit. Placed over them is a black plastic latch with teeth that align with the notches. In the locked position, the discs push the latch right against the horizontal spring in the middle right. In this position, the two protrusions on the top of the latch prevent the two metal bolts from rotating in. The bolts go through the loops in the zipper handles. They also connect to the button on the other side of the lock, so in this position the button can't be moved.

When the right combination is entered, the teeth on the latch match the notches on the discs. The spring extends, snaps the latch left and moves it out of the way of the bolts. The button can now be pressed and the bolts rotate to free the zipper.

So, what went wrong with my lock? When I removed the lock from the suitcase, the spring that moves the latch was rattling freely inside the case. Without it, the latch will not move on its own, even when the right combination is set. I think this is the likely cause of my troubles. The spring does not have a proper seat on the fixed end, so it seems likely that a strong enough knock on the lock could displace it. I guess that is not beyond the usual abuse luggage experiences in air travel. After replacing the spring the lock works just fine. I'm not using it again though.

Would it be possible to get to my toiletries in a less destructive way? If I had known that the problem was the displaced spring, the solution would simply have been to set the right combination, make sure the button is not pressed (to remove friction on the latch) and then rotate the suitcase until gravity moved the latch out of the way.
Another possibility would be to open the zipper and then unscrew the lock from the inside. An unscrewed lock is easily disassembled and removed from the zipper handles, but finding the screws and space for a screwdriver might be impractical with a full suitcase.

Posted by | Categories: Life | Comments »

## Testing Galaksija's memory

26.09.2017 20:13

Before attempting to restore the damaged laminate of Mr. Ivetić's Galaksija I wanted to have some more confidence that the major components were still in working order. The fact that the NAND gate in the character generator patch was still working correctly gave me hope that the board had not been connected to a wrong power supply, which could do serious damage to the semiconductors. Still, I wanted to test some of the bigger integrated circuits. Memory chips are relatively straightforward to check. They are also mounted in sockets on this board, so they were easy to remove and test on a breadboard.

This is another post in the series about the restoration of an original Galaksija microcomputer. Galaksija is a small home microcomputer from the former Yugoslavia that was built around the Z80 microprocessor. It used EPROMs, a predecessor to modern flash memory, to store its simple operating system. As was common at the time, Galaksija could not update its system software by itself. In fact, western home computers typically stored such software in mask ROMs, where code and data were programmed by physically etching a pattern into the metal layer of the chip. Even though Yugoslavia had a semiconductor industry that was capable of making ROMs, producing a custom chip was not economical for Galaksija, which had relatively low production numbers.

Galaksija originally came with two EPROMs. The first one, called ROM A in the manual and marked Master EPROM here, contains 4 kB of Z80 machine code and data for basic operations.
It includes specialized functions related to the hardware: the video driver for generating the video signal, keyboard read-out, as well as modulation and demodulation routines for saving data to an audio cassette. Some higher-level functions are also included: a simplistic terminal emulation with a command-line interface, a stack-based floating point calculator and a BASIC interpreter based on the TRS-80's. My incomplete Galaksija disassembly contains more details.

The second EPROM is the character generator ROM. Galaksija's video output is designed fundamentally around text. The frame buffer contains only references to the characters that are to be drawn on the screen. How these characters look, the actual pixels you see, is stored in the character ROM. This is similar to how text mode worked on old PCs and was done to limit RAM use. In fact, a bitmapped image of the whole screen would not fit into the 2 kB of Galaksija's RAM. Of course, this means that only very limited graphics can be displayed. By sacrificing a lot of RAM and hacking the video driver, some limitations can be worked around.

Galaksija uses static RAM for its working memory. Using costly static RAM was obsolete even in the early 1980s and is the main reason why Galaksija originally only had 2 kB of RAM (which could be upgraded to 6 kB by inserting up to two more identical 2 kB chips). The much larger dynamic RAM would have required more complicated circuitry to interface with the CPU and was only added in the later Galaksija "Plus" upgrade. Interestingly, the Z80 CPU was originally meant to use dynamic RAM and includes functionality to perform the required refresh cycles. In Galaksija, however, this function was instead used for video generation. This board uses a rare EMS6116 RAM chip made by Iskra Semiconductors.

After carefully removing all three memory chips from the board I wired them up to an Arduino Mega using a breadboard and a rat's nest of jumper wires.
I used a slightly modified version of Oddbloke's RomReader sketch to dump the EPROM contents. Since the 6116 RAM has an electrical interface that is very similar to 27-series EPROMs, I also used this sketch as the base of my RAM test. The RAM test sketch first wrote a test pattern (bytes 00, FF, AA and 55) to all RAM addresses and then read it out to check for any bad bits. Sources for both Arduino sketches are available here.

The first few runs of the EPROM dumper showed that ROM A didn't read out correctly. Its contents differed from what I had on record and consecutive reads yielded somewhat different results. After double-checking my setup, however, it turned out that my Arduino Mega board only puts out around 4.5 V on the +5 V supply. This is at the lower specified limit for these EPROMs, so it could explain the occasional bad bits. After supplying a more stable voltage to the EPROM from a lab power supply, ROM A read correctly. Its contents were exactly the same as what I had on record (and what I use on my Galaksija replica). Similarly, the RAM and the other EPROM also checked out fine.

However, in contrast to ROM A, the character ROM contents differed from what I had expected. After a closer look at the binary dump (using the chargendump tool from my Galaksija tools to visualize its contents) it turned out that the difference is in characters 0 and 39 (ASCII hex codes 40 and 27 respectively). These two characters are used to draw the two halves of the logo that is displayed before the distinctive Galaksija READY command prompt. The character ROM I use on my replica contains the arrow-like logo of Elektronika inženjering. The ROM image from this Galaksija contains the game-of-life glider logo of Mipro design. Both of these logos are etched into the copper on the solder side of the Galaksija circuit board.

I don't know why there are two versions of the character ROM in existence and how old each of them is.
As far as I know, both of these companies were involved in the manufacture of the original do-it-yourself kit parts (including the PCB and the keyboard). Wikipedia currently says that later factory-built computers were built by Elektronika inženjering, so it is possible that the arrow logo version is the more recent one. The blurry screenshot from the original Galaksija manual suggests that the glider logo was in use when the screenshot was made. This seems to confirm that the glider logo is the older one. In any case, both versions of the ROM already float around the web, so this discovery isn't terribly exciting. The Galaksija emulator, for instance, comes with the glider logo version. As far as I can remember, I originally obtained my arrow logo ROM images from the Wikipedia page. The article used to contain hex dumps, but they have since been deleted due to copyright and non-encyclopedic content concerns.

In conclusion, everything worked as expected, which is great news as far as the restoration of this Galaksija is concerned and a green light to proceed to fixing the PCB. It's also a testament to the reliability of old integrated circuits. I was pretty sure at least the EPROMs would have discharged. The datasheet mentions that normal office fluorescent lighting will discharge an unprotected die in around 3 years. Considering that the chips were most likely programmed more than 30 years ago, it is surprising that the contents lasted this long (the bit errors at low supply voltage might be the first sign of deterioration though). It's also surprising that the Iskra EMS6116 survived and passed all the tests I could throw at it. Domestic chips did not have the best of reputations as far as reliability was concerned, but at least this specimen seems to have survived the test of time just fine.

Posted by | Categories: Digital | Comments »

## On piping Curl to apt-key

21.08.2017 16:52

Piping Curl to Bash is dangerous.
Hopefully this is well known at this point, and many projects no longer advertise this installation method. Several popular posts by security researchers have demonstrated all kinds of covert ways such instructions can be subverted by an attacker who manages to gain access to the server hosting the install script. Passing HTTP responses uninspected to other sensitive system commands is no safer, however.

This is how the Docker documentation currently advertises adding their GPG key to the trusted keyring for Debian's APT. The APT keyring is used to authenticate packages that are installed by Debian's package manager. If an attacker can put their public key into this keyring, they can for example serve compromised package updates to your system. Obviously you want to be extra sure that no keys from questionable origins get added.

The Docker documentation correctly prompts you to verify that the added key matches the correct fingerprint (in contrast to some other projects). However, the suggested method is flawed, since the fingerprint is only checked after an arbitrary number of keys has already been imported. Worse, the command shown specifically checks only the fingerprint of the Docker signing key, not of the key (or keys) that were actually imported.

Consider the case where download.docker.com starts serving an evil key file that contains both the official Docker signing key and the attacker's own key. In this case, the instructions above will not reveal anything suspicious. The displayed fingerprint matches the one shown in the documentation:

$ curl -fsSL https://www.tablix.org/~avian/docker/evil_gpg | sudo apt-key add -
OK
$ sudo apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

However, importing our key file also silently added another key to the keyring. Considering that apt-key list outputs several screenfuls of text on a typical system, this is not easy to spot after the fact without specifically looking for it:

$ sudo apt-key list|grep -C1 evil
pub   1024R/6537017F 2017-08-21
uid                  Dr. Evil <evil@example.com>
sub   1024R/F2F8E843 2017-08-21


The solution is obviously not to pipe the key file directly into apt-key without inspecting it first. Interestingly, this is not as straightforward as it sounds. The pgpdump tool is recommended by the first answer that comes up when asking Google about inspecting PGP key files. The version in Debian Jessie fails to find anything suspicious about our evil file:

$ pgpdump -v
pgpdump version 0.28, Copyright (C) 1998-2013 Kazu Yamamoto
$ curl -so evil_gpg https://www.tablix.org/~avian/docker/evil_gpg
$ pgpdump evil_gpg|grep -i evil

Debian developers suggest importing the key into a personal GnuPG keyring, inspecting its integrity and then exporting it into apt-key. That is more of a hassle, and not that useful for Docker's key, which doesn't use the web of trust. In our case, inspecting the file with GnuPG directly is enough to show that it in fact contains two keys with different fingerprints:

$ gpg --version|head -n1
gpg (GnuPG) 1.4.18
\$ gpg --with-fingerprint evil_gpg
pub  4096R/0EBFCD88 2017-02-22 Docker Release (CE deb) <docker@docker.com>
Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
sub  4096R/F273FCD8 2017-02-22
pub  1024R/6537017F 2017-08-21 Dr. Evil <evil@example.com>
Key fingerprint = 3097 2749 79E6 C35F A009  E25E 6CD6 1308 6537 017F
sub  1024R/F2F8E843 2017-08-21


The key file used in the example above was created by simply concatenating the two ASCII-armored public key blocks. It looks pretty suspicious in a text editor because it contains two ASCII-armor headers (which is probably why pgpdump stops processing it before the end). However, two public key blocks could just as easily have been exported into a single ASCII-armor block.

Posted by | Categories: Code | Comments »

## BeagleCore Module eMMC and SD card benchmarks

12.08.2017 13:03

In my experience, slow filesystem I/O is one of the biggest disadvantages of cheap ARM-based single-board computers. It contributes a lot to the general feeling of sluggishness when you work interactively with such systems. Of course, for many applications you might not care much about the filesystem after booting. It all depends on what you want to use the computer for. But it's very rare to see one of these small ARMs that would not be several times slower than a 10-year old Intel x86 box as far as I/O is concerned.

A while ago I did some benchmarking of the eMMC flash on the old Raspberry Pi Compute Module and compared it with the SD card performance on a Raspberry Pi Zero and the SATA drive in a CubieTruck. In the meantime, the project that brought the Compute Module onto my desk back then has pivoted to a BeagleCore module. Since I now have a small working system with the BCM1, I thought I might do the same thing and compare its I/O performance with the other systems I tested earlier.

The BCM1 is a small Linux-running computer that comes in the form of a surface-mount hybrid module. It is built around a Texas Instruments AM335x Sitara system-on-chip with a single-core 1 GHz ARM Cortex-A8 CPU. The module comes with 512 MB of RAM and a 4 GB eMMC chip. It is supported by software from the BeagleBoard ecosystem and in my case runs Debian Jessie with the 4.4.30-ti-r64 Linux kernel. Our board has a micro SD card socket, so I was able to benchmark the SD card as well as the eMMC flash. I was using a Samsung EVO+ 32 GB card.

To perform the benchmark I used the same script on BCM1 as I used in my previous tests. I used hdparm and dd to estimate uncached and cached read and write throughputs. I ran each test 5 times and used the best result.

The write performance is better with the SD card than the eMMC flash on BCM1. SD card on BCM1 is also faster than the SD card on Raspberry Pi Zero, although this is probably not relevant. It's likely that the Zero performance was limited by the no-name SD card that came with it. eMMC on the BCM1 is slower than on Raspberry Pi CM.

The 16.2 MB/s result for the SD card here is somewhat suspect however. After several repeats, the first run of five was always the fastest, with the later runs only yielding around 12 MB/s. It is as if some caching was involved (even though fdatasync was specified with dd).

Interestingly, things turn around with read performance. BCM1's eMMC flash is better at reading data than the SD card. In fact, BCM1 eMMC flash reads faster than both Raspberry Pi setups I tested. It is still at least 3 times slower than a SATA drive on the CubieTruck.

Cached read performance is the least interesting of these tests. It's more or less the benchmark of the CPU memory access rather than anything related to the storage devices. Hence both BCM1 results are more or less identical. Interestingly, BCM1 with the 1 GHz CPU does not seem to be significantly better than the Compute Module with the 700 MHz CPU.

My results for the BCM1 eMMC flash are similar to those published here for the BeagleBone Black. This is expected, since BeagleBone Black has the same hardware as BCM1, and gives me some confidence that my results are at least somewhat correct.

Posted by | Categories: Digital | Comments »