Monitoring HomePlug AV devices

23.05.2018 18:51

Some time ago I wanted to monitor the performance of a network of Devolo dLAN devices. These are power-line Ethernet adapters. Each device looks like a power brick with a standard Ethernet RJ45 port. You can plug several of these into wall sockets around a building and theoretically they will together act as an Ethernet bridge, linking all their ports as if they were connected to a single network. The power-line network in question seemed to be having intermittent problems, but without some kind of a log it was hard to pinpoint exactly what the problem was.

I have very little experience with power-line networks and some quick web searches yielded conflicting information about how these things work and what protocols are at work behind the curtains. Purely from the user perspective, the experience seems to be similar to wireless LANs. While individual devices have flashy numbers written on them, such as 500 Mbps, these are just theoretical "up to" throughputs. In practice, the bandwidth of individual links in the network seems to be dynamically adjusted based on signal quality and is commonly quite a bit lower than advertised.

Devolo Cockpit screenshot.

Image by devolo.com

Devolo provides an application called Cockpit that allows you to configure the devices and visualize the power-line network. The feature I was most interested in was the real-time display of the physical layer bitrate for each individual link in the network. While the Cockpit is available for Linux, it is a user-friendly point-and-click graphical application and chances were small that I would be able to integrate it into some kind of an automated monitoring process. The prospect of decoding the underlying protocol seemed easier. So I did a packet capture with Wireshark while the Cockpit was fetching the bitrate data:

HomePlug AV protocol decoded in Wireshark.

Wireshark immediately showed the captured packets as part of the HomePlug AV protocol and provided a nice decode. This finally gave me a good keyword I could base my further web searches on, which revealed a helpful white paper with some interesting background technical information. The HomePlug AV physical layer apparently uses OFDM in the 2 - 28 MHz frequency range, with an adaptive number of bits per modulation symbol. The network management is centralized, using a coordinator and a mix of CSMA/CA and TDMA access.

More importantly, the fact that Wireshark decode showed bitrate information in plain text gave me confidence that replicating the process of querying the network would be relatively straightforward. Note how the 113 Mbit/sec in the decode directly corresponds to hex 0x71 in raw packet contents. It appeared that only two packets were involved, a Network Info Request and a Network Info Confirm:

HomePlug AV Network Info Confirmation packet decode.

However, before diving directly into writing code from scratch, I came across the Faifa command-line tool on GitHub. The repository seems to be a source code dump from the now-defunct dev.open-plc.org web site. There is very little in terms of documentation or clues to its provenance. The last commit was in 2016. However, a browse through its source code revealed that it is capable of sending the 0xa038 Network Info Request packet and receiving and decoding the corresponding 0xa039 Network Info Confirm reply. This was exactly what I was looking for.

Some tweaking and a compile later, I was able to get the bitrate info from my terminal. Here I am querying one device in the power-line network (its Ethernet address is given in the -a parameter). The queried device returns the current network coordinator and a list of stations it is currently connected to, together with the receive and transmit bitrates for each of those connections:

# faifa -i eth4 -a xx:xx:xx:xx:xx:xx -t a038
Faifa for HomePlug AV (GIT revision master-5563f5d)

Started receive thread
Frame: Network Info Request (Vendor-Specific) (0xA038)

Dump:
Frame: Network Info Confirm (Vendor-Specific) (A039), HomePlug-AV Version: 1.0
Network ID (NID): xx xx xx xx xx xx xx
Short Network ID (SNID): 0x05
STA TEI: 0x24
STA Role: Station
CCo MAC: 
	xx:xx:xx:xx:xx:xx
CCo TEI: 0xc2
Stations: 1
Station MAC       TEI  Bridge MAC        TX   RX  
----------------- ---- ----------------- ---- ----
xx:xx:xx:xx:xx:xx 0xc2 xx:xx:xx:xx:xx:xx 0x54 0x2b
Closing receive thread
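
The TX and RX columns are raw hex values in Mbit/s, following the same encoding as the 0x71 → 113 Mbit/sec correspondence mentioned above (0x54 and 0x2b here are 84 and 43 Mbit/s). Pulling them out of the output is then a matter of splitting the station line. Here is a minimal sketch, assuming the table format stays exactly as printed above:

# A station line from the faifa output above: MAC, TEI, bridge MAC,
# then TX and RX PHY rates encoded as hex values in Mbit/s.
line = "xx:xx:xx:xx:xx:xx 0xc2 xx:xx:xx:xx:xx:xx 0x54 0x2b"

fields = line.split()
tx = int(fields[3], 16)  # 0x54 -> 84 Mbit/s
rx = int(fields[4], 16)  # 0x2b -> 43 Mbit/s
print("TX %d Mbit/s, RX %d Mbit/s" % (tx, rx))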

The original tool had some annoying problems that I needed to work around before deploying it to my monitoring system. Most of all, it operated by sending the query with the Ethernet broadcast address as the source. It then put the local network interface into promiscuous mode to listen for broadcasted replies. This seemed like bad practice and created problems for me, the least of which was log spam with repeated kernel warnings about promiscuous mode enters and exits. It's possible that the use of broadcasts was a workaround for a hardware limitation on some devices, but the devices I tested (dLAN 200 and dLAN 550) seem to reply just fine to queries from non-broadcast addresses.

I also fixed a race condition in the way the original tool received the replies. If multiple queries were running on the same network simultaneously, replies from different devices would sometimes get mixed up. Finally, I fixed some rough corners regarding libpcap usage that prevented the multi-threaded Faifa process from exiting cleanly once a reply was received, and I added a -t command-line option for sending and receiving a single packet.

As usual, the improved Faifa tool is available in my fork on GitHub:

$ git clone https://github.com/avian2/faifa.git
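
With the single-shot -t option, wrapping the tool into a periodic logging job is then straightforward. A minimal sketch of such a polling loop is below; the interface name, device address and log file are placeholders, and the real monitoring setup will obviously differ:

#!/usr/bin/python2.7
# Query one device every 5 minutes and append the raw faifa output,
# prefixed with a timestamp, to a log file for later parsing.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["faifa", "-i", "eth4", "-a", "xx:xx:xx:xx:xx:xx", "-t", "a038"])
    with open("homeplug.log", "a") as f:
        f.write("%d\n%s\n" % (time.time(), out))
    time.sleep(300)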

To conclude, here is an example of the bitrate data I recorded using this approach. It shows the transmit bitrates reported by one device in the network for its links to two other devices (here numbered "station 1" and "station 2"). The data was recorded over the course of 9 days and the network was operating normally during this time:

Recorded PHY bitrate for two stations.

Even this graph shows some interesting things. Some devices (like the "station 1" here) seem to enter a power saving mode. Such devices don't appear in the network info reports, which is why data is missing for some periods of time. Even out of power saving mode, devices don't seem to update their reported bitrates if there is no data being transferred on that specific link. I think this is why the "station 2" here seems to have long periods where the reported bitrate remains constant.

Posted by Tomaž | Categories: Code | Comments »

On the output voltage of a real flyback converter

13.05.2018 13:18

I was recently investigating a small switch-mode power supply for an LED driver. When I was looking at the output voltage waveform on an oscilloscope, it occurred to me that it looked very little like the typical waveforms I remember from university lectures. So I thought it would be interesting to briefly explain here why it looks like that and where the different parts of the waveform come from.

The power supply I was looking at is based on the THX203H integrated circuit. It's powered from line voltage (230 V AC) and uses an isolated (off-line) flyback topology. The circuit is similar to the one shown for a typical application in the THX203H datasheet. The switching frequency is around 70 kHz. Below I modified the original schematic to remove the pi filter on the output, which wasn't present in the circuit I was working with:

Schematic of a flyback converter based on THX203H.

When this power supply was loaded with its rated current of 1 A, this is what the output voltage on the output terminals looked like on an oscilloscope:

Output voltage of a flyback converter on an oscilloscope.

If you recall an introduction to switch-mode voltage converters, they operate by charging a coil using the input voltage and discharging it into the output. A regulator keeps the duty cycle of the switch adjusted so that the mean inductor current is equal to the load current. For flyback converters, a typical figure you might recall for discontinuous operation is something like the following:

Idealized coil currents and output voltage in a flyback converter.

From top to bottom are the primary coil current, secondary coil current and output voltage, not drawn to scale on the vertical axis. First the ferrite core is charged by the increasing current in the primary coil. Then the controller turns off the primary current and the core discharges through the secondary coil into the output capacitor.

Note how the ripple on the oscilloscope looks more like the secondary current than the idealized output voltage waveform. The main realization here is that the ripple in this case is defined mostly by the output capacitor's equivalent series resistance (ESR) rather than its capacitance. The effect of ESR is ignored in the idealized graphs above.

In this power supply, the 470 μF 25 V aluminum electrolytic capacitor on the output has an ESR somewhere around 0.1 Ω. The peak capacitor charging current is around 3 A and hence the maximum voltage drop on the ESR is around 300 mV. On the other hand, the ripple due to the capacitor charging and discharging is an order of magnitude smaller, at around 30 mV peak-to-peak in this specific case.
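
These estimates are simple to reproduce. A back-of-the-envelope calculation with the values mentioned above (the ESR figure is my estimate for this capacitor, not a datasheet value):

f_sw = 70e3    # switching frequency [Hz]
C = 470e-6     # output capacitance [F]
ESR = 0.1      # estimated capacitor ESR [ohm]
I_peak = 3.0   # peak capacitor charging current [A]
I_load = 1.0   # load current [A]

v_esr = I_peak * ESR          # ~300 mV drop across the ESR
v_cap = I_load / (f_sw * C)   # ~30 mV charge/discharge ripple per cycle
print("ESR ripple: %.0f mV, capacitive ripple: %.0f mV"
      % (v_esr * 1e3, v_cap * 1e3))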

Adding the ESR drop to the capacitor voltage gives a much better approximation of the observed output voltage. The break in the slope marks the point where the coil has stopped discharging. Before that point the slope is defined by the decaying current in the coil and the capacitor ESR. After that point, the slope is defined by the discharging output capacitor.

ESR voltage drop, capacitor voltage and output voltage in a flyback converter.

The only feature still missing is the high-frequency noise we see on the oscilloscope. This is caused by the switching done by the THX203H. Abrupt changes in the current cause the leakage inductance of the transformer and the capacitances of the diodes to ring. Since the output filter has been removed as a cost-cutting measure, the ringing can be seen unattenuated on the output terminals. The smaller oscillations are caused by the primary switching on, while the larger oscillations that obscure the rising slope are caused by the THX203H switching off the primary coil.

Switching noise on the output voltage of a flyback converter.

Posted by Tomaž | Categories: Analog | Comments »

Switching window scaling in GNOME

01.05.2018 13:22

A while back I got a new work laptop: a 13" Dell XPS 9360. I was pleasantly surprised that installing the latest Debian Stretch with GNOME went smoothly and no special tweaks were needed to get everything up and running. The laptop works great and the battery life in Linux is a significant step up from my old HP EliteBook. The only real problem I noticed after a few months of use is weird behavior of the headphone jack, which often doesn't work for some unknown reason.

In any case, this is my first computer with a high-DPI screen. The 13-inch LCD panel has a resolution of 3200x1800, which means that text on a normal X window screen is almost unreadable without some kind of scaling. Thankfully, the GNOME that ships with Stretch has relatively good support for window scaling. You can set a scaling factor in the GNOME Tweak Tool and all windows will have their content scaled by an integer factor, making text and icons intelligible again.

Window scaling setting in GNOME Tweak Tool.

This setting works fine for me, at least for terminal windows and Firefox, which is what I mostly use on this computer. I've only noticed some minor cosmetic issues when I change this at run-time. Some icons and buttons in GNOME Shell (like the bar on the top of the screen or the settings menu on the upper-right) will sometimes look weird until the next reboot.

A bigger annoyance was the fact that I often use this computer with a normal (non-high-DPI) external monitor. I had to open up the Tweak Tool each time I connected or disconnected the monitor. Navigating huge scaled UI on the low-DPI external monitor or tiny UI on the high-DPI laptop panel got old really quick. It was soon obvious that changing that setting should be a matter of a single key press.

Finding a way to set window scaling programmatically was surprisingly difficult (not unlike my older effort in switching the audio output device). I tried a few different approaches, like setting some dconf keys, but none worked reliably. I ended up digging into the Tweak Tool source. This revealed that the Tweak Tool is built around a nice Python library that exposes the necessary settings as functions you can call from your own scripts. The rest was simple.

I ended up with the following Python script:

#!/usr/bin/python2.7

from gtweak.utils import XSettingsOverrides

def main():
    xsettings = XSettingsOverrides()

    sf = xsettings.get_window_scaling_factor()

    if sf == 1:
        sf = 2
    else:
        sf = 1

    xsettings.set_window_scaling_factor(sf)

if __name__ == "__main__":
    main()

I have this script saved as toggle_hidpi and a shortcut set in GNOME Keyboard Settings so that Super-F11 runs it. Note that on the laptop's built-in keyboard this means pressing the Windows logo, Fn, and F11 keys, due to the weird modern practice of hiding the normal function keys behind the Fn modifier. On an external USB keyboard, only the Windows logo and F11 need to be pressed.

High DPI toggle shortcut in GNOME keyboard settings.

Posted by Tomaž | Categories: Code | Comments »

Faking adjustment layers with GIMP layer modes

12.03.2018 19:19

Drawing programs usually use a concept of layers. Each layer is like a glass plate you can draw on. The artwork appears in the program's window as if you were looking down through the stack of plates, with ink on upper layers obscuring that on lower layers.

Example layer stack in GIMP.

Adobe Photoshop extends this idea with adjustment layers. These do not add any content, but instead apply a color correction filter to lower layers. Adjustment layers work in real time: the color correction gets applied seamlessly as you draw on the layers below.

Support for adjustment layers in GIMP is a common question. GIMP does have a Color Levels tool for color correction. However, it can only be applied to one layer at a time. The Color Levels operation is also destructive: if you apply it to a layer and want to continue drawing on it, you also have to adjust the colors of your brushes to stay consistent. This is often hard to do. In practice it means that you need to leave the color correction for the end, when no further changes will be made to the drawing layers and you can apply the correction to a flattened version of the image.

Adjustment layers are currently on the roadmap for GIMP 3.2. However, considering that Debian still ships with GIMP 2.8, that seems like a long way off. Is it possible to have something like that today? I found some tips on the web on how to fake adjustment layers using various layer modes, but these are very hand-wavy. Ideally, I would like to perfectly replicate the Color Levels dialog, not follow some vague instructions on what to do if I want the picture a bit lighter. One of those posts did give me an idea though.

Layer modes allow you to perform some simple mathematical operations between pixel values on layers, like addition, multiplication and a handful of others. If you fill a layer with a constant color and set it to one of these layer modes, you are in fact transforming pixel values from layers below using a simple function with one constant parameter (the color on the layer). Color Levels operation similarly transforms pixel values by applying a function. So I wondered, would it be possible to combine constant color layers and layer modes in such a way as to approximate arbitrary settings in the color levels dialog?

Screenshot of the Color Levels dialog in GIMP.

If we look at the Color Levels dialog, there are three important settings: black point (b), white point (w) and gamma (g). These can be adjusted for the red, green and blue channels individually, but since the operations are identical and independent, it suffices to focus on a single channel. Also note that GIMP performs all operations in the range of 0 to 255. However, the calculations work out as if it were operating in the range from 0 to 1 (effectively, fixed point arithmetic is used with a scaling factor of 255). Since it's somewhat simpler to demonstrate, I'll use the range from 0 to 1.

Let's first ignore gamma (leave it at 1) and look at b and w. Mathematically, the function applied by the Color Levels operation is:

y = \frac{x - b}{w - b}

where x is the input pixel value and y is the output. On a graph it looks like this (you can also get this graph from GIMP with the Edit this Settings as Curves button):

Plot of the Color Levels function without gamma correction.

Here, the input pixel values are on the horizontal axis and output pixel values are on the vertical. This function can be trivially split into two nested functions:

y = f_2(f_1(x))
where
f_1(x) = x - b
f_2(x) = \frac{x}{w-b}

f1 shifts the origin by b and f2 increases the slope. This can be replicated using two layers on top of the layer stack. GIMP documentation calls these masking layers:

  • Layer mode Subtract, filled with pixel value b,
  • Layer mode Divide, filled with pixel value w - b

Note that values outside of the range 0 - 1 are clipped. This happens on each layer, which makes things a bit tricky for more complicated layer stacks.

The above took care of the black and white point settings. What about gamma adjustment? This is a non-linear operation that gives more emphasis to darker or lighter colors. Mathematically, it's a power function with a real exponent. It is applied on top of the previous linear scaling.

y = x^g

GIMP allows values of g between 0.1 and 10 in the Color Levels tool. Unfortunately, no layer mode applies a power function. However, the Soft light mode applies the following equation:

R = 1 - (1 - M)\cdot(1-x)
y = ((1-x)\cdot M + R)\cdot x

Here, M is the pixel value of the masking layer. If M is 0, this simplifies to:

y = x^2

So by stacking multiple such masking layers with Soft light mode, we can get any exponent that is a multiple of 2. This is still not really useful though. We want to be able to approximate any real gamma value. Luckily, the layer opacity setting opens up some more options. Layer opacity p (again in the range 0 - 1) basically does a linear combination of the original pixel value and the masked value. So taking this into account we get:

y = (1-p)x + px^2

By stacking multiple masking layers with opacity, we can get a polynomial function:

y = a_1 x + a_2 x^2 + a_3 x^3 + \dots

By carefully choosing the opacities of the masking layers, we can manipulate the polynomial coefficients a_n. Polynomials of a sufficient degree can be a very good approximation for a power function with g > 1. For example, here is an approximation using the above method for g = 3.33, using 4 masking layers:

Polynomial approximation for the gamma adjustment function.
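
To illustrate how suitable opacities can be found, here is a rough sketch that numerically fits a stack of Soft light masking layers (each filled with M = 0, so each applies the squaring from above) to the target power function. This is only a demonstration of the idea, not the actual code from my plug-in:

import numpy as np
from scipy.optimize import minimize

g = 3.33        # target gamma
n_layers = 4    # number of Soft light masking layers

x = np.linspace(0.0, 1.0, 256)
target = x**g

def apply_stack(p):
    # A Soft light masking layer with M = 0 and opacity pi maps
    # y -> (1 - pi)*y + pi*y**2, as derived above.
    y = x.copy()
    for pi in p:
        y = (1.0 - pi) * y + pi * y**2
    return y

def cost(p):
    return np.sum((apply_stack(p) - target)**2)

res = minimize(cost, np.full(n_layers, 0.5),
               bounds=[(0.0, 1.0)] * n_layers)
print("layer opacities: %s" % np.round(res.x, 3))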

What about the g < 1 case? Unfortunately, polynomials don't give us a good approximation there, and no layer mode involves a square root or any other usable function like that. However, we can apply the same principle of linear combination through opacity settings to multiple saturated Divide operations. This effectively makes it possible to approximate the power function piecewise linearly. It's not as good as the polynomial, but with enough linear segments it can get very close. Here is one such approximation, for g = 0.33, again using 4 masking layers:

Piecewise linear approximation for the gamma adjustment function.
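
The same fitting approach works here, with the saturated divide taking the place of the Soft light squaring: a Divide masking layer filled with value d and set to opacity p maps y to (1-p)y + p·min(y/d, 1). A sketch of the fit, again just an illustration with both the fill values and opacities as free parameters:

import numpy as np
from scipy.optimize import minimize

g = 0.33        # target gamma
n_layers = 4    # number of Divide masking layers

x = np.linspace(0.0, 1.0, 256)
target = x**g

def apply_stack(params):
    # params holds (fill value, opacity) pairs; each Divide masking
    # layer maps y -> (1 - p)*y + p*clip(y/d, 0, 1).
    y = x.copy()
    for d, p in params.reshape(-1, 2):
        y = (1.0 - p) * y + p * np.clip(y / d, 0.0, 1.0)
    return y

def cost(params):
    return np.sum((apply_stack(params) - target)**2)

x0 = np.tile([0.5, 0.5], n_layers)
bounds = [(0.01, 1.0), (0.0, 1.0)] * n_layers
res = minimize(cost, x0, bounds=bounds)
print(res.x.reshape(-1, 2))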

To test all of this together in practice I've made a proof-of-concept GIMP plug-in that implements this idea for gray-scale images. You can get it on GitHub. Note that it needs relatively recent SciPy and NumPy versions, so if it doesn't work at first, try upgrading them from PyPI. This is how the adjustment layers look in the Layers window:

Fake color adjustment layer stack in GIMP.

Visually, the results are reasonably close to what you get through the ordinary Color Levels tool, although not perfectly identical. I believe some of the discrepancy is caused by rounding errors. The following comparisons show a black-to-white linear gradient with a gamma correction applied. The first stripe is the original gradient, the second has the gamma correction applied using the built-in Color Levels tool and the third one has the same correction applied using the fake adjustment layer created by my plug-in:

Fake adjustment layer compared to Color Levels for gamma = 0.33.

Fake adjustment layer compared to Color Levels for gamma = 3.33.

How useful is this in practice? It certainly has the non-destructive, real-time property of Photoshop's adjustment layers. The adjustment is immediately visible on any change in the drawing layers. Changing the adjustment settings, like the gamma value, is somewhat awkward, of course. You need to delete the layers created by the plug-in and create a new set (although an improved plug-in could assist with that). There is no live preview, like in the Color Levels dialog, but you can use Color Levels for preview and then copy-paste the values into the plug-in. The multiple layers also clutter the layer list. Unfortunately it's impossible to put them into a layer group, since layer mode operations don't work on layers outside of a group.

My current code only works on gray-scale images. The polynomial approximation uses the layer opacity setting, and layer opacity can't be set separately for the red, green and blue channels. This means that this way of applying gamma adjustment can't be adapted for colors. However, support for colors should be possible with the piecewise linear method, because there the divisor can be controlled separately for each channel (it's defined by the color fill on the layer). The opacity still stays the same, but I think it should be possible to make it work. I haven't done the math for that though.

Posted by Tomaž | Categories: Life | Comments »

OpenCT on Debian Stretch

24.02.2018 10:03

I don't like replacing old technology that isn't broken, although I wonder sometimes whether that's just rationalizing the fear of change. I'm still using a bunch of old Schlumberger Cryptoflex (e-Gate) USB tokens for securely storing client-side SSL certificates. All of them are held together by black electrical tape at this point, since the plastic became brittle with age. However they still serve their purpose reasonably well, even if software support for them was obsoleted a long time ago. So what follows is another installment in the series on keeping these hardware tokens working on the latest Debian Stable release.

Stretch upgrades the pcscd and libpcsclite1 packages (from the pcsc-lite project) to version 1.8.20. Unfortunately, this upgrade breaks the old openct driver, which is to my knowledge the only way to use these tokens on a modern system. This manifests itself as the following error when dumping the list of currently connected smart cards:

$ pkcs15-tool -D
Using reader with a card: Axalto/Schlumberger/Gemalo egate token 00 00
PKCS#15 binding failed: Unsupported card

Some trial and error and a git bisect led me to commit 8eb9ea1, which apparently caused this issue. It was committed between releases 1.8.13 (which was shipped in Jessie) and 1.8.14. This commit introduces some subtle changes in the way buffers of data are exchanged between pcscd and its drivers, which break openct 0.6.20.

There are two ways around that: you can keep using pcscd and libpcsclite1 from Jessie (the 1.8.13 source package from Jessie builds fine on Stretch), or you can patch openct. I've decided on the second option.

The openct driver is no longer developed upstream and was removed from Debian in Jessie (the last official release was in 2010, although there has been some effort to modernize it). I keep my own git repository and Debian packages based on the last package shipped in Wheezy. My patched version 0.6.20 includes the changes required for systemd support, and now also the patch required to support the modern pcscd version on Stretch. The latter was helpfully pointed out to me by Ludovic Rousseau on the pcsc-lite mailing list.

My openct packages for Stretch on amd64 can be found here (version 0.6.20-1.2tomaz2). The updated source is also in a git repository (with a layout compatible with git-buildpackage), should you want to build it yourself:

$ git clone http://www.tablix.org/~avian/git/openct.git

Other smart card-related packages work for me as shipped in Stretch (e.g. opensc and opensc-pkcs11 0.16.0-3). No changes were necessary in the Firefox configuration for it to be able to pull client-side certificates from the hardware tokens. It is still required, however, to insert the token only when no instances of Firefox are running.

Posted by Tomaž | Categories: Code | Comments »

Notes on the general-purpose clock on BCM2835

17.02.2018 20:30

Raspberry Pi boards, or more specifically the Broadcom systems-on-chip they are based upon, have the capability to generate a wide range of stable clock signals entirely in hardware. These are called general-purpose clock peripherals (GPCLK) and the clock signals they generate can be routed to some of the GPIO pins as alternate pin functions. I was recently interested in this particular detail of the Raspberry Pi and noticed that there is little publicly accessible information about this functionality. I had to distill a coherent picture from sometimes conflicting sources of information floating around various websites and forums. So, in the hope that it will be useful for someone else, I'm posting my notes on Raspberry Pi GPCLKs with links for anyone who needs to dig deeper. My research was focused on the Raspberry Pi Compute Module 3, but it should mostly apply to all Raspberry Pi boards.

Here is an example of a clock setup that uses two GPCLK peripherals to produce two clocks that drive components external to the BCM2835 system-on-chip (in my case a 12.288 MHz clock for an external audio interface and a 24 MHz clock for a USB hub). This diagram is often called the clock tree and is a common sight in datasheets. Unfortunately, the publicly-accessible BCM2835 datasheet omits it, so I drew my own from what information I could gather on the web. Only components relevant to clocking the GPCLKs are shown:

Rough sketch of a clock tree for BCM2835.

The root of the tree is an oscillator on the far left of the diagram. BCM2835 derives all other internal clocks from it by multiplying or dividing its frequency. On a Compute Module 3 the oscillator clock is a 19.2 MHz signal defined by the on-board crystal resonator. The oscillator frequency is fixed and cannot be changed in software.

19.2 MHz crystal resonator on a Compute Module 3.

The oscillator is routed to a number of phase-locked loop (PLL) devices. A PLL is a complex device that allows you to multiply the frequency of a clock by a configurable, rational factor. Practical PLLs necessarily add some amount of jitter into the clock. How much depends on their internal design and is largely independent of the multiplication factor. For BCM2835 some figures can be found in the Compute Module datasheet, under the section Electrical specification. You can see that routing the clock through a PLL increases the jitter by 28 ps.

GPCLK jitter characteristics from RPI CM datasheet.

Image by Raspberry Pi (Trading) Ltd.

The BCM2835 contains 5 independent PLLs: PLLA, PLLB, PLLC, PLLD and PLLH. The system uses most of these for its own purposes, such as clocking the ARM CPU, the VideoCore GPU, the HDMI interface, etc. The PLLs have a default configuration, which however cannot be strongly relied upon. Some default frequencies are listed on this page. Note that it says that PLLC settings depend on overclocking settings. My own experiments show that PLLH settings change between firmware versions and depending on whether a monitor is attached to HDMI or not. On some Raspberry Pi boards, other PLLs are used to clock on-board peripherals like Ethernet or Wi-Fi - search for gp_clk in dt-blob.dts. On the Compute Module, PLLA appears to be turned off by default and free for general-purpose use. This kernel commit suggests that PLLD settings are also stable.

For each PLL, the multiplication factors can be set independently in software using registers in I/O memory. To my knowledge these registers are not publicly documented, but the clk-bcm2835.c file in the Linux kernel offers some insight into what settings are available. The output of each PLL branches off into several channels. Each channel has a small integer divider that can be used to lower the frequency. It is best to leave the settings of the PLL and channel dividers to the firmware by using the vco@PLLA and chan@APER sections in dt-blob.bin. This is described in the Raspberry Pi documentation.

There are three available GPCLK peripherals: GPCLK0, GPCLK1 and GPCLK2. For each you can independently choose a clock source. 5 clock sources are available: the oscillator clock and 4 channels from 4 PLLs (PLLB isn't selectable). Furthermore, each GPCLK peripheral has an independent fractional divider. This divider can again divide the frequency of the selected clock source by (almost) an arbitrary rational number.

Things are somewhat better documented at this stage. The GPCLK clock source and fractional divider are controlled from I/O memory registers that are described in the BCM2835 ARM peripherals document. Note that there is an error in the equation for the average output frequency in Table 6-32. It should be:

f_{GPCLK} = \frac{f_{source}}{\mathrm{DIVI} + \frac{\mathrm{DIVF}}{4096}}
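
As a worked example, consider the clock tree at the top: the PLLA VCO at 1920 MHz and the APER channel divider of 4 give a 480 MHz clock source. The 24 MHz clock then works out to a plain integer divider, while 12.288 MHz needs the fractional part. A quick sketch of the arithmetic:

f_source = 1920e6 / 4   # PLLA VCO divided by the APER channel divider

for f_target in (24e6, 12.288e6):
    ratio = f_source / f_target
    divi = int(ratio)
    divf = int(round((ratio - divi) * 4096))
    # 24 MHz: DIVI=20, DIVF=0; 12.288 MHz: DIVI=39, DIVF=256
    print("%.3f MHz: DIVI=%d DIVF=%d" % (f_target / 1e6, divi, divf))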

It is perfectly possible to set up a GPCLK by writing directly into registers from Linux user space, for example by mmapping /dev/mem or using a command-line tool like busybox devmem. This way you can hand-tune the integer and fractional parts of the dividers or use the noise-shaping functions. You might want to do this if jitter is important. When experimenting, I find the easiest way to get the register base address is to search the kernel log. In the following case, the CM_GP0CTL register would be at 0x3f201070:

# dmesg|grep gpiomem
gpiomem-bcm2835 3f200000.gpiomem: Initialised: Registers at 0x3f200000
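
With the base address known, a quick sanity check of the current register contents is possible without writing any code, for example with a single 32-bit read (using the CM_GP0CTL address from the example above):

# busybox devmem 0x3f201070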

A simpler way of taking care of the GPCLK settings is again through the dt-blob.bin file. By using the clock@GPCLK0 directives under clock_routing and clock_setup, the necessary register values are calculated and set automatically at boot by the firmware. As far as I can see, these only allow using PLLA and APER. Attempting to set or use other PLL sources has unpredictable results in my experience. I also recommend checking the actual clock output with an oscilloscope, since these automatic settings might not be optimal.

The settings shown on the clock tree diagram on the top were obtained with the following part compiled into the dt-blob.bin:

clock_routing {
	vco@PLLA {
		freq = <1920000000>;
	};
	chan@APER {
		div = <4>;
	};
	clock@GPCLK0 {
		pll = "PLLA";
		chan = "APER";
	};
	clock@GPCLK2 {
		pll = "PLLA";
		chan = "APER";
	};
};

clock_setup {
	clock@GPCLK0 {
		freq = <24000000>;
	};
	clock@GPCLK2 {
		freq = <12288000>;
	};
};

If the clocks can be set up automatically, why is all this background important? The rational dividers used by GPCLKs work by switching the output between two frequencies. In contrast to PLLs, the jitter they introduce depends largely on their settings and can be quite severe in some cases. For example, this is how the 12.288 MHz clock looked when set by the firmware from a first-attempt dt-blob.bin:

Clock signal with a lot of jitter produced by GPCLK function.

It's best to keep your clocks divided by integer ratios, because in that case the dividers introduce minimal jitter. If you can't do that, jitter is minimized by maximizing the clock source frequency. In my case, I wanted to generate two clocks that didn't have an integer ratio, so I was forced to use the fractional part on at least one divider. I opted for a low-jitter, integer-divided clock for the 24 MHz USB clock and a higher-jitter one (but still well within the acceptable range) for the 12.288 MHz audio clock.

This is how the 12.288 MHz clock looks with the settings shown above. It's a significant improvement over the first attempt:

Low-jitter clock signal produced by GPCLK function.

GPCLKs can be useful in lowering the cost of a system, since they can remove the need for separate expensive crystal resonators. However, it can sometimes be deceptive how easy they are to set up on the Raspberry Pi. dt-blob.bin offers a convenient way of enabling clocks at boot without any assistance from Linux userland. Unfortunately the exact mechanism by which the firmware sets the registers is not public, hence it's worth double-checking its work by understanding what is happening behind the scenes, inspecting the registers and checking the actual clock signal with an oscilloscope.

Posted by Tomaž | Categories: Digital | Comments »

FOSDEM 2018

08.02.2018 11:28

Gašper and I attended FOSDEM last weekend. It is an event about open source and free software development that happens each year in Brussels. I've known about it for some time, mostly because it was quite a popular destination among a group of Kiberpipa alumni a few years ago. I've never managed to join them. This year though I was determined to go. Partly because I couldn't get a ticket for 34C3 and was starting to miss this kind of a crowd, and partly because several topics on the schedule were relevant to one of my current projects.

Marietje Schaake on Next Generation Internet at FOSDEM 2018

FOSDEM turned out to be quite unlike any other event I've attended. I was surprised to hear that there are no tickets or registration. When we arrived at the event it was immediately apparent why that is the case. Basically, for two days the Université libre de Bruxelles simply surrenders their lecture rooms and hallways to software developers. Each room is dedicated to talks on a particular topic, such as software defined radio or Internet of things. Hallways are filled with booths, from companies offering jobs to volunteer projects gathering donations and selling t-shirts. I counted more than 50 tracks, distributed over 5 campus buildings. More than once I found myself lost in the unfamiliar passageways.

Opinions on whether the event is too crowded seem to differ. I often found the halls unbearably packed and had to step out to get a breath of air, but I do have a low threshold for this kind of thing. The mostly cold, wet and windy weather didn't help with the crowds indoors either. The lecture rooms themselves had volunteers taking care they were not filled over capacity. This meant that once you got into a room, following the talks was a pleasant experience. However, most rooms had long queues in front. The only way to get in was to show up early in the morning and stay there. I didn't manage to get back into any smaller room after leaving for lunch, so I usually ended up in the biggest auditorium with the keynote talks.

Queues at the food court at FOSDEM 2018.

Speaking of keynotes, I enjoyed Simon Phipps's and Italo Vignoli's talk about the past 20 years of open source. They gave a thorough overview of the topic with some predictions for the future. I found myself wondering whether the open source movement really was such a phenomenal success. It is indeed everywhere behind the pervasive web services of today, yet the freedom for users to run, study and improve software at their convenience is mostly missing these days, when software is hidden behind an opaque web server on someone else's computer. Liam Proven in Alternate histories of computing explored how every generation reinvents the solutions and mistakes of its predecessors, with a focus on Lisp machines. It seems that one defining property of the computer industry is that implementing a thing from scratch is invariably favored over understanding and building upon existing work. I also recommend Steven Goodwin's talk about how hard it is to get behavior right in the simplest things, like smart light switches. He gave nice examples of why a smart appliance that has not been well thought out will cause more grief than a dumb old one.

From the more technical talks I don't have many recommendations, since I didn't manage to get into most of the rooms I had planned for. From hanging around the Internet of things discussions, one general sentiment I picked up was that MQTT has become the de facto standard for any Internet-connected sensor or actuator. The ESP8266 remains as popular as ever for home-brew projects. Moritz Fischer gave a fascinating, but very dense, intro to a multitasking scheduler for ARM Cortex microcontrollers that was apparently custom developed by Google for use on their Chromebook laptops. Even though I'm fairly familiar with ARM, he lost me after the first 10 minutes. However, if I ever need to look into multitasking on ARM microcontrollers, his slides seem to be a very good reference.

One of the rare moments of dry weather at FOSDEM 2018

I don't regret going to FOSDEM. It has been an interesting experience and the trip was worth it just for the several ideas Gašper and I got there. I can't complain about the quality of the talks I've seen, and although the whole thing seemed chaotic at times, it must have been a gargantuan effort on the part of the volunteer team to organize. I will most likely not be going again next year though. I feel like this is more of an event for getting together with a team you've been closely involved with in developing a large open source project. Apart from drive-by patches, I've not collaborated in such projects for years now, so I often felt like an outsider who was more adding to the crowds than contributing to the discussion.

Posted by Tomaž | Categories: Life | Comments »

Tracking notebooks

27.01.2018 20:33

Two months ago, there was a post on Hacker News about keeping a lab notebook. If you like to keep written notes, the training slides from NIH might be interesting to flip through, even if you're not working in a scientific laboratory. In the ensuing discussion, user cgore mentioned that they recommend attaching a Bluetooth tracker to notebooks. In recent years I've developed quite a heavy note-taking habit and these notebooks have become increasingly important to me. I carry one almost everywhere I go. I can be absentminded at times and have had my share of nightmares about losing the last 80 pages of my notes. So the idea of a Bluetooth tracker giving me peace of mind seemed worth exploring further.

cgore mentioned the Tile tracker in their comment. The Tile Slim variant seemed to be most convenient for sticking to a cover of a notebook. After some research I found that the Slovenian company Chipolo was developing a similar credit card-sized device, but in November last year I couldn't find a way to buy one. The Wirecutter's review of trackers was quite useful. In the end, I got one Tile Slim for my notebook.

Tile Slim Bluetooth tracker.

This is the tracker itself. The logo on the top side is a button and there is a piezo beeper on the other side. It is indeed surprisingly thin (about 2.5 mm). Of course, if you cover it with a piece of paper (or ten), the bump it makes under the pen is still hard to ignore when writing over it. I mostly use school notebooks with soft covers, so the first challenge was where to put it on the notebook so that it would be minimally annoying. I also wanted it to be removable so I could reuse the Tile on a new notebook.

Tile Slim in a paper envelope on a notebook cover.

What I found to work best for me is to put the Tile in a paper pocket. I glue the pocket to the back cover of the notebook, near the spine. I still feel the edges when writing, but it's not too terrible. So far I've been making the pockets from ordinary office paper (printable pattern) and they seem to hold up fine for the time the notebook is in active use. They do show noticeable wear though, so maybe using heavier paper (or duct tape) wouldn't be a bad idea. The covers of the notebooks I use are thick enough that I haven't noticed myself pressing the button on the Tile with my pen.

There is no simple way to disassemble the Tile and the battery is neither replaceable nor rechargeable. Supposedly it lasts a year and after that you can get a new Tile at a discount. Unless, that is, you live in a country where they don't offer that. Do-it-yourself battery replacement doesn't look like an option either, although I will probably take mine apart when it runs out. I don't particularly like the idea of encouraging more throw-away electronics.

Screenshots of the Tile app on Android.

I found the Android app that comes with the tracker quite confusing at first. Half of the screen space is dedicated to "Tips" and it has some kind of a walk-through mode when you first use it. It was not clear to me what was part of the walk-through and what was for real. I ended up with the app in a state where it insisted that my Tile was lost and stubbornly tried to push their crowd-sourced search feature. Some web browsing later I found that I needed to restart my phone, and after that I didn't notice the app erroneously losing contact with the Tile again. In any case, hopefully I won't be opening the app too often.

Once I figured them out, the tracker and the app did seem to do their job reasonably well. For example, if I left the notebook at the office and went home, the app would tell me the street address of the office where I left it. I simulated a lost notebook several times and it always correctly identified the last-seen address, so that seems encouraging. I'm really only interested in this kind of rough tracking. I'm guessing the phone records the GPS location when it no longer hears the Bluetooth beacons from the tracker. The Tile also has a bunch of features I don't care about: you can make your phone ring by pressing the button on the Tile or you can make the Tile play a melody. There's also an RSSI ranging kind of thing in the app where it will tell you how far from the Tile you are in the room, but it's as useless as all other such attempts I've seen.

I have lots of problems in a notebook.

With apologies to xkcd

The battery drain on the phone after setting up the app is quite noticeable. I haven't done any thorough tests, but my Motorola Moto G5 Plus went from around 80% charge at the end of the day to around 50%. While before I could usually go two days without a charger, now the battery will be in the red after 24 hours. I'm not sure whether that is the app itself or the fact that Bluetooth on the phone must now be turned on all the time. Android's battery usage screen lists neither the Tile app nor Bluetooth (nor anything else for that matter) as causing significant battery drain.

Last but not least, carrying the Tile on you is quite bad for privacy. The Tile website simply brushes such concerns aside, saying we are all being constantly tracked anyway. But the fact is that using this makes (at least) one more entity aware of your every move. The app does call home with your location, since you can look up the current Tile location on the web (although for me, it says I don't own any Tiles, so maybe the call-home feature doesn't actually work). It's another example of an application that would work perfectly fine without any access to the Internet at all, but for some reason needs to talk to the Cloud. The Tiles are also trivially trackable by anyone with a Bluetooth dongle and I wonder what kind of authentication is necessary for triggering that battery-draining play-the-melody thing.

I will probably keep using this Tile until it runs out. It does seem to be useful, although I have not yet lost a notebook. I'm not certain I will get another one though. The throw-away nature of it is off-putting and I don't like being a walking Bluetooth beacon. However, it does make me wonder how much thinner you could make one without the useless button and beeper, and perhaps with a wireless charger. And also whether some kind of a rolling code could be used to prevent non-authorized people from tracking your stuff.

Posted by Tomaž | Categories: Life | Comments »

News from the Z80 land

05.01.2018 19:30

Here is some overdue news regarding modern development tools for the vintage Z80 architecture. I've been wanting to give this a bit of exposure, but alas other things in life interfered. I'm happy that there is still this much interest in old computers and software archaeology, even if I haven't been able to dedicate much of my time to it in recent months.


Back in 2008, Stefano Bodrato added support for Galaksija to Z88DK, a software development toolkit for Z80-based computers. Z88DK consists of a C compiler and a standard C library (with functions like printf and scanf) that interfaces with Galaksija's built-in terminal emulation routines. His work was based on my annotated ROM disassembly and development tools. The Z88DK distribution also includes a few cross-platform examples that can be compiled for Galaksija, provided they fit into the limited amount of RAM. When compiling for Galaksija, the C compiler produces a WAV audio file that can be directly loaded over an audio connection, emulating an audio cassette recording.

This November, Stefano improved support for the Galaksija compile target. The target now more fully supports the graphics functions from the Z88DK library, including things like putsprite for Galaksija's 2x3 pseudographics. He also added joystick emulation to the Z88DK cross-platform joystick library. The library emulates two joysticks via the keyboard. The first one uses the arrow keys on Galaksija's keyboard. The second one uses the 5-6-7-8 keys in the top row that should be familiar to users of the ZX Spectrum. He also added a clock function that uses the accurate clock ticks provided by the ROM's video interrupt.

Z88DK clock example on Galaksija

(Click to watch Z88DK clock example on Galaksija video)

This made it possible to build more games and demos for Galaksija. I've tried the two-player snakes game and the TV clock demo and (after some tweaking) they both worked on my Galaksija replica. To try them yourself, follow the Z88DK installation instructions and then compile the demos under examples/graphics by adapting the zcc command-lines at the top of the source files.

For instance, to compile clock.c into clock.wav I used the following:

$ zcc +gal -create-app -llib3d -o clock clock.c

The second piece of Z80-related news I wanted to share relates to the z80dasm 1.1.5 release I made back in August. z80dasm is a disassembler for Z80 machine code. The latest release significantly improves the handling of undocumented instructions, an issue that was reported to me by Ast Moore. I've already written about that in more detail in a previous blog post.

Back in November I also updated the Debian packaging for z80dasm, so the new release is now also available as a Debian source or binary package that should cleanly build and install on Jessie, Stretch and Sid. Unfortunately the updated package has not yet been uploaded to the Debian archive. I have been unable to reach any of my previous sponsors (in the Debian project, a package upload must be sponsored by a Debian developer). If you're a Debian developer with an interest in vintage computing and would like to sponsor this upload, please contact me.

Update: z80dasm 1.1.5-1 package is now in Debian Unstable.

Posted by Tomaž | Categories: Code | Comments »

Luggage lock troubles

05.10.2017 18:32

Back in August an annoying thing happened while I was traveling in Germany. My luggage lock wouldn't open. I was pretty sure I had the right combination, but the damn thing would not let me get to my things. I tried some other numbers that came to mind and made a few futile attempts at picking it. I then sat down and went through all 1000 combinations. It would still not open. At that point it was around 1 am, I had a long bus ride ahead of me that day and desperately needed a shower and sleep. I ran out of patience, took the first hard object that came to hand and broke off the lock. Thankfully, that only took a few moments.

Broken luggage combination lock.

This is the culprit, reassembled back into its original shape after the fact. It's a type of 3-digit combination lock that locks the two zipper handles on the main compartment of the suitcase. It came off an "Easy Trip" suitcase of Chinese origin. It's obviously just a slight inconvenience for anyone wanting to get in, but I didn't want to ruin my suitcase, and dragging my clothes through a narrow crack made by separating the zipper with a ballpoint pen felt uncivilized.

Inside luggage combination lock in locked position.

This is what the inside of the lock looks like in the locked position. There are three discs with notches, one for each digit. Placed over them is a black plastic latch with teeth that align with the notches. In the locked position, the latch is pushed right by the discs against the horizontal spring in the middle right. In this position, the two protrusions on the top of the latch prevent the two metal bolts from rotating in. The bolts go through the loops in the zipper handles. They also connect to the button on the other side of the lock, so in this position the button can't be moved.

Inside luggage combination lock in unlocked position.

When the right combination is entered, the teeth on the latch match the notches on the discs. The spring extends, snaps the latch left and moves it out of the way of the bolts. The button can be pressed and the bolts rotate to free the zipper.

So, what went wrong with my lock? When I removed the lock from the suitcase, the spring that moves the latch was freely rattling inside the case. Without it, the latch will not move on its own, even when the right combination is set. I think this is the likely cause of my troubles. The spring does not have a proper seat on the fixed end, so it seems likely that a strong enough knock on the lock could displace it. I guess that is not beyond the usual abuse luggage experiences in air travel. After replacing the spring, the lock works just fine. I'm not using it again though.

Would it be possible to get to my toiletries in a less destructive way? Had I known that the problem was the displaced spring, the solution would have been simply to set the right combination, make sure the button is not pressed (to remove friction on the latch) and then rotate the suitcase until gravity moves the latch out of the way. Another possibility would be to open the zipper and then unscrew the lock from the inside. An unscrewed lock is easily disassembled and removed from the zipper handles, but finding the screws and space for a screwdriver might be impractical with a full suitcase.

Posted by Tomaž | Categories: Life | Comments »