A thousand pages of bullet journal

12.08.2016 17:27

A few weeks ago I filled the 1000th page in my Bullet Journal. Actually, I'm not sure I can call it that, since it isn't really oriented around bullet points. It's just a series of notebooks with consistently numbered, dated and titled pages for easy referencing, monthly indexes and easy-to-see square bullet points for denoting tasks. Most of the things I said two years ago still hold, so I'll try not to repeat myself too much here.

A thousand pages worth of notebooks.

Almost everything I do these days goes into this journal. Lab notes, sketches of ideas, random thoughts, doodles of happy ponies, to-do lists, pages worth of crossed-out mathematical derivations, interesting quotes, meeting notes and so on. Writing things down in a notebook often significantly clears them up. Once I have a concise and articulated definition of a problem, the solution usually isn't far away. Pen and paper help me stay focused at talks and meetings, much as an open laptop does the opposite.

Going back through past notes gives a good perspective on how much new ideas depend on the context and mindset that created them. An idea for some random side-project that seems interesting and fun at first invariably looks much less shiny and worthy of attention after reading through the written version a few days or weeks later. I can't decide though whether it's better to leave such a thing on paper or hack together some half-baked prototype before the initial enthusiasm fades away.

The number of pages I write per month appears to be increasing. That might be because I settled on using cheap school notebooks. I find that I'm much more relaxed scribbling into a 50 cent notebook than ruining a 20€ Moleskine. Leaving lots of whitespace is wasteful, but helps a lot with readability and later corrections. Yes, whipping out a colorful children's notebook at an important meeting doesn't look very professional. Then again, most people at such meetings are too busy typing emails into their laptops to notice.

Number of pages written per month.

As much as it might look like a waste of time, I grew to like the monthly ritual of making an index page. I like the sense of achievement it gives me when I look back at what I've accomplished the previous month. It's also an opportunity for reflection. If the index is hard to fit on a single page, that's a good sign that the previous month was all too fragmented and that too many things wanted to happen at once.

The physical nature of the journal means that I can't carry the whole history with me at all times, which is sometimes a problem. It is an unfortunate feature of my line of work that people not infrequently want an unannounced meeting about a topic that was last discussed half a year ago. On the other hand, saying that I don't have my notes on me at the moment does make for an honest excuse.

Indexes help, but finding things can be problematic. Then again, digital content (that's not publicly on the web) often isn't much better. I often find myself searching in frustration for some piece of code or a document I know exists somewhere on my hard disk, but can't remember any exact keyword that would help me find it. I considered making a digital version of the monthly indexes at one point. I don't think it would be worth the effort, and it would destroy some of the off-line quality of the notebook.

As I mentioned previously, gratuitous cross-referencing between notebook pages, IPython notebooks and other things does help. I tend not to copy tasks between pages, like in the original Bullet Journal idea. For projects that are primarily electronics related though, I'm used to keeping a separate folder with all the calculations and schematics, a habit I picked up long ago. There are not many such projects these days, but I did on one occasion photocopy pages from the notebook. I admit that made me feel absolutely archaic.

Posted by Tomaž | Categories: Life | Comments »

Oxidized diodes

08.08.2016 20:41

Out of curiosity, I salvaged these three LEDs from an old bicycle light that stopped working. Rain must have gotten inside it at one point, because the copper on the PCB was pretty much eaten away by the electrolysis. The chip that blinked the light was dead, but the LEDs themselves still work.

Three LEDs damaged by water compared to a new one.

It's interesting how far inside the bulb the steel leads managed to oxidize. You can see that the right hand lead (anode) on all LEDs is brown all the way to the top, where a bond wire connects it to the die. I would have thought that the epoxy makes a better seal. For comparison, there's a similar new LED on the right. It's also interesting that the positive terminal was more damaged than the negative. On the right-most LED the terminal was actually eaten completely through just outside the bulb (although poking the remains with a needle still lights up the LED, so obviously the die itself was not damaged).

Posted by Tomaž | Categories: Life | Comments »

Newsletters and other beasts

23.07.2016 10:23

It used to be that whenever someone wrote something particularly witty on Slashdot, it was followed by a reply along the lines of "I find your ideas intriguing and would like to subscribe to your newsletter". Ten-odd years later, it seems like the web took that old Simpsons joke seriously. These days you can hardly follow a link on Hacker News without having a pop-up thrown in your face. Most articles now end with a plea for an e-mail address, and I've even been to real-life talks where the speakers continuously advertised their newsletter to the audience.

Recently I've been asked several times why I don't support subscriptions by e-mail, like every other normal website. The short answer is that I keep this blog in a state that I wish other websites I visit would adopt. This means no annoying advertisements, respecting your privacy by not loading third-party Javascript or tracking cookies, HTTPS and IPv6 support, valid XHTML... and good support for the Atom standard. Following the death of Google Reader, the world turned against RSS and Atom feeds. However, I still find them vastly more usable than any alternative. It annoys me that I can't follow interesting people and projects on modern sites like Medium and Hackaday.io through this channel.

Twitter printer at 32C3

That said, you can now subscribe to my blog by e-mail, should you wish to do so (see also sidebar top-right). The thing that finally convinced me to implement this was hearing that some of you use RSS-to-email services that add their own advertisements to my blog posts. I did not make this decision lightly though. I used to host mailing lists and know what an incredible time sink they can be, fighting spam, addressing complaints and so on. I don't have that kind of time anymore, so using an external mass-mailing service was the only option. Running my own mail server in this era is lunacy enough.

Mailchimp seems to be friendly, so I'm using that at the moment. If it turns to the dark side, and depending on how popular the newsletter gets, I might move to some other service - or remove it altogether. For the time being I consider this an experiment. It's also worth mentioning that while there are no ads, Mailchimp does add mandatory tracking features (link redirects and tracking pixels). Of course, it also collects your e-mail address somewhere.

Since I'm on the topic of subscriptions and I don't like writing meta posts like this, I would also like to mention two ways of following my posts that are not particularly well known: if you are only interested in one particular topic I write about, you can search for it. The search results page has an attached Atom feed that only contains posts related to the query. If, on the other hand, you believe that Twitter is the new RSS, feel free to follow @aviansblog (at least until Twitter breaks the API again).

Posted by Tomaž | Categories: Life | Comments »

Visualizing frequency allocations in Slovenia

15.07.2016 17:34

If you go to a talk about dynamic spectrum access, cognitive radio or any other topic remotely connected with the way radio spectrum is used or regulated, chances are one of the slides in the introduction will contain the following chart. The multitude of little colorful boxes is supposed to impress on the audience that the spectrum is overcrowded with existing allocations and that any future technology will have problems finding vacant frequencies.

I admit I've used it myself in that capacity a couple of times. "The only space left unallocated is beyond the top-left and bottom-right edges, below 9 kHz and above 300 GHz", I would say, "and those frequencies are not very useful for new developments." After that I would feel free to advertise the latest crazy idea that will magically create more space, brushing away the fact that spectrum seems to be like IPv4 address space - when there's real need, the powers that be always seem to find more of it.

United States frequency allocations, October 2003.

Image by U.S. Department of Commerce

I was soon getting a bit annoyed by this chart. When you study it, you realize it's showing the wrong thing. Its crowdedness does fit with the general story people are trying to tell, but the ITU categories shown are not the problematic part of the 100-year legacy of radio spectrum regulations. I only came to realize that later though. My first thought was: "Why are people discussing future spectrum in Europe using a ten-year-old chart from the U.S. Department of Commerce showing the situation on the other side of the Atlantic?"

Although there is a handful of similar charts for other countries on the web, I couldn't find one that I would be happy with. So two years back, with a bit of free time and encouraged by the local Open Data group, I set off to make my own. It would show the right thing, be up to date and describe the situation in my home country. I downloaded public PDFs with the Slovenian Electronic Communications Act, the National Table of Frequency Allocations and also a few assorted files with individual frequency licenses. Then I started writing a parser that would turn it all into machine-readable JSON. Then, as you might imagine if you ever encountered the words "PDF", "table" and "parsing" in the same sentence, I gave up after a few days of writing increasingly convoluted and frustrating Python code.

Tabela uporabe radijskih frekvenc

Fast forward two years, and I was again going through some documents that discuss these matters and remembered this old abandoned project. In the meantime, new laws had been passed, the frequency allocation table had been updated several times and the PDFs were structured a bit differently. This meant I had to throw away all my previous work, but on the other hand the new documents looked a bit easier to parse. I took up the challenge again and this time I managed to parse most of the basic NTFA into JSON after a day of work and about 350 lines of Python.

I won't dive deep into technicalities here. I started with the PDF converted to whitespace-formatted UTF-8 text using the pdftotext tool, which comes with Poppler. Then I had a series of functions that successively turned text into structured data. This made it easy to inspect and debug each step. Some of the steps included were "fix typos in document" (there are several, by the way, including inconsistent use of points and commas for decimal marks), "extract column widths", "extract header hierarchy", "normalize service names", etc. If there is interest, I might present the details in a talk at one of the future Open Data Meetups in Ljubljana.
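The successive-transformation approach can be sketched in a few lines. This is an illustration only, not the actual parser: the function names, the regular expressions and the input format are all simplified assumptions, but the structure (each step consuming the previous step's output, so intermediate results are easy to inspect) is the one described above.

```python
import re

def fix_typos(text):
    # normalize decimal commas to points, one of the inconsistencies
    # found in the source document
    return re.sub(r"(?<=\d),(?=\d)", ".", text)

def extract_rows(text):
    # split whitespace-formatted pdftotext output into non-empty lines
    return [line.strip() for line in text.splitlines() if line.strip()]

def parse_band(row):
    # turn a (simplified) row like "87.5-108 MHz BROADCASTING"
    # into structured data
    freqs, services = row.split(" MHz ", 1)
    low, high = (float(f) for f in freqs.split("-"))
    return {"low_mhz": low, "high_mhz": high,
            "services": [s.strip() for s in services.split(",")]}

def parse(text):
    return [parse_band(row) for row in extract_rows(fix_typos(text))]

sample = "87,5-108 MHz BROADCASTING\n174-230 MHz BROADCASTING, MOBILE"
bands = parse(sample)
```

Each stage is a pure function, so a parsing bug can be localized by dumping the output of any single step.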

Once I had the data in JSON, drawing a visualization much like the U.S. one above took another 250 lines using matplotlib. Writing them was much more pleasant in comparison though. In hindsight, it would actually make sense to do the visualization part first, since it was much easier to spot parsing mistakes from the graphical representation than by looking at JSON.
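A minimal sketch of that kind of band chart is below. It assumes the simplified JSON structure from above and a toy two-band dataset; the real chart covers the whole table with far more services, but the core of it is just matplotlib's broken_barh drawing one colored box per allocation.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# toy data in the shape produced by the parser; real input is the
# JSON generated from the NTFA
bands = [
    {"low_mhz": 87.5, "high_mhz": 108.0, "service": "BROADCASTING"},
    {"low_mhz": 108.0, "high_mhz": 117.975,
     "service": "AERONAUTICAL RADIONAVIGATION"},
]
colors = {"BROADCASTING": "tab:orange",
          "AERONAUTICAL RADIONAVIGATION": "tab:blue"}

fig, ax = plt.subplots(figsize=(8, 1.5))
for band in bands:
    width = band["high_mhz"] - band["low_mhz"]
    # one horizontal box per allocation
    ax.broken_barh([(band["low_mhz"], width)], (0, 1),
                   facecolors=colors[band["service"]])
ax.set_yticks([])
ax.set_xlabel("Frequency [MHz]")
fig.savefig("allocations.png")
```

Rendering from structured data like this is also what makes parsing mistakes jump out visually, as noted above.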

Uporaba radijskih frekvenc v Sloveniji glede na storitev

While it is both local and relatively up-to-date, the visualization as such isn't very good. It still only works for conveying that general idea of fragmentation, but not much else. There are way too many categories for an unaided eye to easily match the colors in the legend with the bands in the graph. It would be much better to have an interactive version where you could point to a band and the relevant information would pop up (or maybe even point you to the relevant paragraphs in the PDFs). Unfortunately this is beyond my meager knowledge of Javascript frameworks.

My chart also still just lists the ITU categories. Not only do they have very little to do with finding space for future allocations, they are useless for spotting interesting parts of the spectrum. For example, the famous 2.4 GHz ISM band doesn't stand out in any way here - it's listed simply under "FIXED, MOBILE, AMATEUR and RADIOLOCATION" services. All such interesting details regarding licensing and technologies in individual bands are hidden in various regulations, scattered across a vast number of tables, appendices and different PDF documents. They are often in textual form that currently seems impossible to extract in an automated way.

I'm still glad that I now have at least some of this data in computer-readable form. I'm sure it will come in handy in other projects. For instance, I might eventually use it to add some automatic labels to my real-time UHF and VHF spectrogram from the roof of the IJS campus.

I will not be publicly publishing JSON data and parsing code at the moment. I have concerns about its correctness and the code is so specialized for the specific document that I'm sure nobody will find it useful for anything else. However, if you have some legitimate use for the data, please send me an e-mail and I will be happy to share my work.

Posted by Tomaž | Categories: Life | Comments »

Materialized Munin display

15.05.2016 21:25

Speaking of Munin, here's a thing that I've made recently: A small stand-alone display that cycles through a set of measurements from a Munin installation.

Munin display

(Click to watch Munin display video)

Back when the ESP8266 chip was the big new thing, I ordered a bag of them from eBay. Said bag then proceeded to gather dust in the corner of my desk for a year or so, as such things unfortunately tend to do these days. I also had a really nice white transflective display left over from another project (suffice to say, it cost around 20 £ compared to ones you can get for a tenth of the price with free shipping on DealExtreme). So something like this looked like a natural thing to make.

The hardware is not worth wasting too many words on: an ESP8266 module handles radio and the networking part. The display is a 2-line LCD panel using the common 16-pin interface. An Arduino Pro Mini acts as glue between the display and the ESP8266. There are also 3.3 V (for ESP8266) and 5 V (for LCD and Arduino) power supplies and a transistor level shifter for the serial line between ESP8266 and the Arduino.

The ESP8266 runs stock firmware that exposes a modem-like AT-command interface on a serial line. I could have omitted the Arduino and run the whole thing from the ESP8266 alone; however, the lack of GPIO lines on the module I was using meant that I would have had to use some kind of GPIO extender or multiplexer to drive the 16-pin LCD interface. An Arduino with the WeeESP8266 library just seemed less of a hassle.

Top side of the circuit in the Munin display.

From the software side, the device basically acts as a dumb display. The ESP8266 listens on a TCP socket and Arduino pushes everything that is received on that socket to the LCD. All the complexity is hidden in a Python daemon that runs on my CubieTruck. The daemon uses PyMunin to periodically query Munin nodes, renders the strings to be displayed and sends them to the display.
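The daemon's core can be sketched roughly as follows. The PyMunin querying is left out, and the host name, port and metric names are made up for illustration; the point is only the split described above: render fixed-width lines for the 2x16 LCD, then push them verbatim to the TCP socket the ESP8266 listens on.

```python
import socket

LCD_COLS = 16  # the display is a 2-line, 16-character LCD

def render_lines(label, value, unit):
    # format one measurement as two lines, padded/truncated to the
    # LCD width; the device itself does no formatting at all
    line1 = label[:LCD_COLS].ljust(LCD_COLS)
    line2 = ("%g %s" % (value, unit))[:LCD_COLS].ljust(LCD_COLS)
    return line1 + "\n" + line2 + "\n"

def push(host, port, text):
    # the ESP8266 forwards everything received on the socket to the LCD
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(text.encode("ascii"))

# hypothetical usage:
# push("munin-display.lan", 2323,
#      render_lines("load average", 0.42, ""))
```

Keeping the device this dumb means all cycling, querying and formatting logic can be changed on the CubieTruck without touching the firmware.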

Speaking of ESP8266, my main complaint would be that there is basically zero official documentation about it. Just getting it to boot means reconciling conflicting information from different blog and forum posts (for me, both CH_PD and RST/GPIO16 needed to be pulled low). No one mentioned that RX pin has an internal pull-up. I also way underestimated the current consumption (it says 1 mA stand-by on the datasheet after all and the radio is mostly doing nothing in my case). It turns out that a linear regulator is out of the question and a 3.3 V switch-mode power supply is a must.

My module came with firmware that was very unreliable. Getting official firmware updates from a sticky forum post felt kind of shady and it took some time to get an image that worked with 512 kB flash on my module. That said, the module has been working without resets or hangs for a couple of weeks now which is nice and not something that all similar radio modules are capable of.

Inside the Munin display.

Finally, this is also my first 3D printed project and I learned several important lessons. It's better to leave too much clearance than too little between parts that are supposed to fit together. This box took about four hours of careful sanding and cutting before the top part could be inserted into the bottom since the 3D printer randomly decided to make some walls 1 mm thicker than planned. Also, self-tapping screws and automagically hollowed-out plastic parts don't play nice together.

With all the careful measuring and planning required to come up with a CAD drawing, I'm not sure 3D printing saved me any time compared to a simple plywood box which I could make and fit on the fly. Also, relying on the flexibility and precision of a 3D print made me kind of forget about the mechanical design of the circuit. I'm not particularly proud of the way things fit together and how it looks inside, but most of it is hidden away from view anyway and I guess it works well enough for a quick one-off project.

Posted by Tomaž | Categories: Life | Comments »

Clockwork, part 2

10.04.2016 19:39

I hate to leave a good puzzle unsolved. Last week I was writing about a cheap quartz mechanism I got from an old clock that stopped working. I said that I could not figure out why its rotor only turns in one direction given a seemingly symmetrical construction of the coil that drives it.

There are quite a number of teardowns and descriptions on the web of how such mechanisms work. However, very few seem to address this issue of the direction of rotation, and those that do don't give a very convincing argument. Some mention that the direction has something to do with the asymmetric shape of the coil's core. This forum post mentions that the direction can be reversed if a different pulse width is used.

So, first of all I had a closer look at the core. It's made of three identical iron sheets, each 0.4 mm thick. Here is one of them on the scanner with the coil and the rotor locations drawn over it:

Coil location and direction of rotation.

It turns out there is in fact a slight asymmetry. The edges of the cut-out for the rotor are 0.4 mm closer together on one diagonal than on the other. It's hard to make that out with the unaided eye. It's possible that the curved edge on the other side is there to make it harder to mistakenly assemble the core with the three sheets in inconsistent orientations.

Dimension drawing of the magnetic core.

The forum post about pulse lengths and my initial thought about shaded pole motors made me think that there is some subtle transient effect in play that would make the rotor prefer one direction over the other. Using just a single coil, core asymmetry cannot result in a rotating magnetic field if you assume linear conditions (e.g. no part of the core gets saturated) and no delay due to eddy currents. Shaded pole motors overcome this by delaying magnetization of one part of the core through a shorted auxiliary winding, but no such arrangement is present here.

I did some measurements and back-of-the-envelope calculations. The coil has approximately 5000 turns and a resistance of 215 Ω. The field strength is nowhere near saturation for iron. The current through the coil settles on a time scale of milliseconds (I measured a time constant of 250 μs without the core in place). It seems unlikely any transients in magnetization can affect the movements of the rotor.
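The back-of-the-envelope numbers can be reproduced in a few lines. The resistance and time constant are the measured values from above; the 1.5 V supply is an assumption (a single AA cell, as in the original clock).

```python
R = 215.0      # measured winding resistance, ohm
tau = 250e-6   # measured L/R time constant without the core, s
V = 1.5        # assumed AA battery voltage

L = tau * R    # coil inductance without the core, henry
I = V / R      # steady-state coil current, ampere

print("L = %.1f mH, I = %.1f mA" % (L * 1e3, I * 1e3))
```

About 54 mH and 7 mA - small enough that, as argued above, magnetization transients are over long before the rotor has moved appreciably.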

After a bit more research, I found out that this type of a motor is called a Lavet type stepping motor. In fact, its operation can be explained completely using static fields and transients don't play any significant role. The rotor has four stable points: two when the coil drives the rotor in one or the other direction and two when the rotor's own permanent magnetization attracts it to the ferromagnetic core. The core asymmetry creates a slight offset between the former and the latter two points. Wikipedia describes the principle quite nicely.

To test this principle, I connected the coil to an Arduino and slowly stepped this clockwork motor through its four states. The LED on the Arduino board above shows when the coil is energized. The black dot on the rotor roughly marks the position of one of its poles. You can see that when the coil turns off, the rotor turns slightly forward as its permanent magnet aligns it with the diagonal on the core that has a smaller air gap (one step is a bit more pronounced than the other on the video above). This slight forward advancement from the neutral position then makes the rotor prefer the forward over the backward motion when the coil is energized in the other direction.

It's always fascinating to see how a mundane thing like a clock still manages to have parts in it whose principle of operation is very much not obvious at first glance.

Posted by Tomaž | Categories: Life | Comments »

Clockwork

28.03.2016 15:17

Recently one of the clocks in my apartment stopped. It's been here since before I moved in and is probably more than 10 years old. The housing more or less crumbled away as I opened it. On the other hand, the movement inside looked like it was still in good condition, so I had a look to see if there was anything in it that I could fix.

Back side of the quartz clock movement.

This is a standard 56 mm quartz wall clock movement. It's pretty much the same as in any other cheap clock I've seen. In this case, its makers were quick to dispel any myths about its quality: no jewels in watchmaker's parlance means no quality bearings and I'm guessing unadjusted means that the frequency of its quartz oscillator can't be adjusted.

Circuit board in the clock movement.

As far as electronics is concerned, there's not much to see in there. There's a single integrated circuit, bonded to a tiny PCB and covered with a blob of epoxy. It uses a small tuning-fork quartz resonator to keep time. As the cover promised, there's no sign of a trimmer for adjusting the quartz load capacitance. Two exposed pads on the top press against some metallic strips that connect to the single AA battery. The lifetime of the battery was probably more than a year, since I don't remember the last time I had to change it.

Coil from the clock movement.

The circuit is connected to a coil on the other side of the circuit board. It drives the coil with 30 ms pulses once per second with alternating polarity. The oscilloscope screenshot below shows voltage on the coil terminals.

Voltage waveform on the two coil terminals.
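The drive scheme is simple enough to write down exactly: a 30 ms pulse once per second, with the polarity flipping on every pulse (the numbers are from the oscilloscope measurement above; the code is just a model of the timing, not anything that runs on the clock's chip).

```python
PULSE_MS = 30     # pulse width, from the oscilloscope screenshot
PERIOD_MS = 1000  # one pulse per second

def pulses(n):
    """Return (start_ms, end_ms, polarity) for the first n drive pulses."""
    return [(i * PERIOD_MS,
             i * PERIOD_MS + PULSE_MS,
             1 if i % 2 == 0 else -1)  # polarity alternates each second
            for i in range(n)]
```

Each polarity reversal corresponds to one half-turn of the rotor, which is where the once-per-second tick-tock comes from.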

When the mechanism is assembled, there's a small toroidal permanent magnet sitting in the gap in the coil's core with the first plastic gear on top of it. The toroid is laterally magnetized and works as a rotor in a simple stepper motor.

Permanent magnet used as a rotor in clock movement.

The rotor turns half a turn every second and this is what gives off the audible tick-tock sound. I'm a bit puzzled as to what makes it turn only in one direction. I could see nothing that would work as a shaded pole or something like that. The core also looks perfectly symmetrical with no features that would make it prefer one direction of rotation over the other. Maybe the unusual cutouts on the gear for the second hand have something to do with it.

Update: my follow-up post explains what determines direction of rotation.

Top side of movement with gears in place.

This is what the mechanism looks like with gears in place. The whole construction is very finicky and a monument to material cost reduction. There's no way to run it without the cover in place since gears fall over and the impulses in the coil actually eject the rotor if there's nothing on top holding it in place (it's definitely not as well behaved as one in this video). In fact, I see no traces that the rotor magnet has been permanently bonded in any way with the first gear. It seems to just kind of jump around in the magnetic field and drive the mechanism by rubbing against the inside of the gear.

In the end, I couldn't find anything obviously wrong with this thing. The electronics seem to work correctly. The gears also look and turn fine. When I put it back together it would sometimes run, sometimes it would just jump one step back and forth and sometimes it would stand still. Maybe some part wore down mechanically, increasing friction. Or maybe the magnet lost some of its magnetization and no longer produces enough torque to reliably turn the mechanism. In any case, it's going into the scrap box.

Posted by Tomaž | Categories: Life | Comments »

The problem with gmail.co

14.03.2016 19:41

At this moment, the gmail.co (note the missing m) top-level domain is registered by Google. This is not surprising. It's common practice these days for owners of popular internet services to buy up domains that are similar to their own. It might be to fight phishing attacks (e.g. go-to-this-totally-legit-gmail.co-login-form type affairs), prevent typosquatting or purely for convenience, to redirect users that mistyped the URL to the correct address.

$ whois gmail.co
Registrant Organization:                     Google Inc.
Registrant City:                             Mountain View
Registrant State/Province:                   CA
Registrant Country:                          United States

gmail.co currently serves a plain 404 Not Found page on the HTTP port. Not really user friendly, but I guess it's good enough to prevent web-based phishing attacks.

Now, with half of the world using ...@gmail.com email addresses, it's not uncommon to also mistakenly send an email to a ...@gmail.co address. Normally, if you mistype the domain part of the email address, your MTA will see the DNS resolution fail and you would immediately get either an SMTP error at the time of submission, or a bounced mail shortly after.

Unfortunately, the gmail.co domain actually exists, which means that MTAs will in fact attempt to deliver mail to it. There's no MX DNS record; however, SMTP specifies that MTAs must in that case use the address in the A or AAAA records for delivery. Those do exist (they allow the previously mentioned HTTP error page to be served to a browser).
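The lookup order can be modeled in a few lines. This is a toy model of the RFC 5321 rule, not a real resolver: the records dict stands in for DNS answers, and the gmail.co address in it is illustrative, not the real one.

```python
def smtp_targets(domain, records):
    """Return delivery targets per the RFC 5321 lookup order."""
    mx = records.get((domain, "MX"))
    if mx:
        # MX records exist: use them, lowest preference value first
        return [host for _, host in sorted(mx)]
    # no MX: fall back to an "implicit MX" built from address records
    if records.get((domain, "A")) or records.get((domain, "AAAA")):
        return [domain]
    return []  # nothing resolves; the MTA can bounce immediately

records = {
    ("gmail.com", "MX"): [(5, "gmail-smtp-in.l.google.com")],
    ("gmail.co", "A"): ["203.0.113.1"],  # illustrative address only
}
```

The problem described in this post is exactly the middle branch: because gmail.co has address records, the fallback fires and the MTA dutifully tries to deliver.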

To further complicate the situation, the SMTP port 25 on the IPs referenced by those A and AAAA records is blackholed. This means that an MTA will attempt to connect to it, hang while the remote host eats up SYN packets, and fail after the TCP handshake times out. A timeout looks to the MTA like an unresponsive mail server, which means it will continue to retry delivery for a considerable amount of time. RFC 5321 says that it should take at least 4-5 days before it gives up and sends a bounce:

Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days. It MAY be appropriate to set a shorter maximum number of retries for non-delivery notifications and equivalent error messages than for standard messages. The parameters to the retry algorithm MUST be configurable.

In a nutshell, what all of this means is that if you make a typo and send a mail to @gmail.co, it will take around a week for you to receive any indication that your mail was not delivered. Needless to say, this is bad. Especially if the message you were sending was time critical in nature.

Update: Exim will warn you when a message has been delayed for more than 24 hours, so you'll likely notice this error before the default 6 day retry timeout. Still, it's annoying and not all MTAs are that friendly.

The lesson here is that if you register your own typosquatting domains, do make sure that mail sent to them is immediately bounced. One way is to simply publish an invalid ("null") MX record, which is an immediate error for SMTP. You can also run an SMTP server that actively rejects all incoming mail (possibly with a friendly error message reminding the user of the mistyped address), but that requires some more effort.

As for this particular blunder of Google's, a workaround is to add a special retry rule for gmail.co to your MTA so that it gives up faster (e.g. see Exim's retry configuration).
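Such a rule might look something like the following sketch for Exim's retry section. The exact intervals are arbitrary; check the syntax against your Exim version before using it.

```
# In the "begin retry" section of the Exim configuration:
# for gmail.co, retry every 15 minutes for 2 hours, then give up.
# The usual catch-all rule follows it.
gmail.co    *    F,2h,15m
*           *    F,2h,15m; G,16h,1h,1.5; F,4d,6h
```

With this, a typoed gmail.co address produces a bounce within hours instead of the better part of a week.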

Posted by Tomaž | Categories: Life | Comments »

Some SPF statistics

21.02.2016 19:04

Some people tend their backyard gardens. I host my own mail server. Recently, there has been a push towards more stringent mail server authentication to fight spam and abuse. One of the simple ways of controlling which server is allowed to send mail for a domain is the Sender Policy Framework. Zakir Durumeric explained it nicely in his Neither Snow Nor Rain Nor MITM talk at the 32C3.

The effort for more authentication seems to be headed by Google. That is not surprising. Google Mail is e-mail for most people nowadays. If anyone can push for changes in the infrastructure, it's them. A while ago Google published some statistics regarding the adoption of different standards for their inbound mail. Just recently, they also added visible warnings for their users if mail they received has not been sent from an authenticated server. Just how much an average user can do about that (except perhaps pressure their correspondents to start using Google Mail) seems questionable though.

Anyway, I implemented an SPF check for inbound mail on my server some time ago. I never explicitly rejected mail based on it, however; my MTA just adds a header to incoming messages. I figured the added header might get picked up by the Bayesian spam filter if it became significant at any point. After reading about Google's efforts, I wondered what the situation regarding SPF checks looks like for me. Obviously, I see a very different sample of the world's e-mail traffic than Google's servers do.

For this experiment I took a 3 month sample of inbound e-mail that was received by my server between November 2015 and January 2016. The mail was classified by Bogofilter into spam and non-spam mail, mostly based on textual content. SPF records were evaluated by spf-tools-perl upon reception. Explanation of results (what softfail, permerror, etc. means) is here.

SPF evaluation results for non-spam messages.

SPF evaluation results for spam messages.

As you can see, the situation in this little corner of the Internet is much less optimistic than the 95.3% SPF adoption rate that Google sees. More than half of the mail I see doesn't have an SPF record. A successful SPF validation also doesn't look like a very strong signal for spam filtering either, with 22% of spam mail successfully passing the check.

It's nice that I saw no hard SPF failures for non-spam mail. I checked my inbox for mail that had softfails and permerrors. Some of it was borderline-spammy and some of it was legitimate and appeared to be due to the sender having a misconfigured SPF record.

Another interesting point I noticed is that some sneaky spam mail comes with its own headers claiming SPF evaluation. This might be a problem if the MTA just adds another Received-SPF header at the bottom and doesn't remove the existing one. If you then have a simple filter on Received-SPF: pass somewhere later in the pipeline, it's likely the filter will hit the spammer's header first instead of the header your MTA added.
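One defensive approach this suggests is to trust only the topmost Received-SPF header: your own MTA prepends its headers on reception, so a forged copy that arrived with the message sits further down. A sketch using Python's standard email module (the header contents here are made up):

```python
from email import message_from_string

def spf_result(raw_message):
    """Return the verdict from the topmost Received-SPF header, if any."""
    msg = message_from_string(raw_message)
    headers = msg.get_all("Received-SPF") or []
    # get_all() returns headers in message order, topmost first -
    # that is the one the local MTA added on reception
    if not headers:
        return None
    return headers[0].split(None, 1)[0].strip(";").lower()

forged = (
    "Received-SPF: softfail (local MTA check)\n"
    "Received-SPF: pass (header forged by the sender)\n"
    "Subject: test\n"
    "\n"
    "body\n"
)
```

A naive substring match on "Received-SPF: pass" would accept the forged message; picking the first header in order does not.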

Posted by Tomaž | Categories: Life | Comments »

IEEE 802.11 channel survey statistics

14.02.2016 20:17

I attended the All our shared spectrum are belong to us talk by Paul Fuxjaeger this December at the 32C3. He talked about a kernel patch that adds radio channel utilization data to the beacons sent out by a wireless LAN access point. The patched kernel can be used with OpenWRT to inform other devices in the network of the state of the radio channel, as seen from the standpoint of the access point. One of the uses for this is reducing the hidden node problem and, as I understand it, they used it to improve the robustness of a Wi-Fi mesh network.

His patch got my attention because I did not know that typical wireless LAN radios kept such low-level radio channel statistics. I previously did some surveys of the occupancy of the 2.4 GHz band for a seminar at the Institute. My measurements using a TI CC2500 transceiver on a VESNA sensor node showed that the physical radio channel very rarely reached any kind of significant occupancy, even in the middle of very busy Wi-Fi networks. While I did not pursue it further, I remained suspicious of that result. Paul's talk gave me an idea that it would be interesting to compare such measurements with statistics available from the Wi-Fi adapter itself.

I did some digging and found the definition of struct survey_info in include/net/cfg80211.h in the Linux kernel:

/**
 * struct survey_info - channel survey response
 * @channel: the channel this survey record reports, mandatory
 * @filled: bitflag of flags from &enum survey_info_flags
 * @noise: channel noise in dBm. This and all following fields are
 *	optional
 * @channel_time: amount of time in ms the radio spent on the channel
 * @channel_time_busy: amount of time the primary channel was sensed busy
 * @channel_time_ext_busy: amount of time the extension channel was sensed busy
 * @channel_time_rx: amount of time the radio spent receiving data
 * @channel_time_tx: amount of time the radio spent transmitting data
 *
 * Used by dump_survey() to report back per-channel survey information.
 * This structure can later be expanded with things like
 * channel duty cycle etc.
 */
struct survey_info {
	struct ieee80211_channel *channel;
	u64 channel_time;
	u64 channel_time_busy;
	u64 channel_time_ext_busy;
	u64 channel_time_rx;
	u64 channel_time_tx;
	u32 filled;
	s8 noise;
};

You can get these values for a wireless adapter on a running Linux system using ethtool or iw without patching the kernel:

$ ethtool -S wlan0
NIC statistics:
     noise: 161
     ch_time: 19404258681
     ch_time_busy: 3082386526
     ch_time_ext_busy: 18446744073709551615
     ch_time_rx: 2510607684
     ch_time_tx: 371239068
$ iw dev wlan0 survey dump
Survey data from wlan0
	frequency:			2422 MHz [in use]
	noise:				-95 dBm
	channel active time:		19404338429 ms
	channel busy time:		3082398273 ms
	channel receive time:		2510616517 ms
	channel transmit time:		371240518 ms

It's worth noting that ethtool will print out values even if the hardware does not set them. For example, the ch_time_ext_busy above seems invalid (it's 0xffffffffffffffff in hex). ethtool also misinterprets the noise value: 161 is -95 read as an unsigned 8-bit value.
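Correcting the noise field is a one-line reinterpretation of the byte:

```python
def to_signed8(value):
    """Reinterpret an unsigned 8-bit value as two's complement."""
    return value - 256 if value > 127 else value

print(to_signed8(161))  # prints -95, matching the iw output
```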

Which fields contain valid values depends on the wireless adapter. In fact, grepping for channel_time_busy through the kernel tree lists only a handful of drivers that touch it. For example, the iwlwifi-supported Intel Centrino Advanced-N 6205 in my laptop does not fill any of these fields, while the Atheros AR9550 adapter using the ath9k driver in my wireless router does.

Apart from the comment in the code listed above, I haven't yet found any more detailed documentation on the meaning of these fields. I checked the IEEE 802.11 standard, but as far as I can see, it only mentions that radios can keep a statistic on the percentage of time the channel is busy, without going into details.

Weekly Wi-Fi channel statistic

Weekly Wi-Fi channel noise statistic

I wrote a small Munin plugin that keeps track of these values. For time counters the graph above shows the derivative (effectively the percentage of time the radio spent in a particular state). With some experiments I was able to find out the following:

  • noise: I'm guessing this is the threshold value used by the energy detection carrier sense mechanism in the CSMA/CA protocol. It's around -95 dBm for my router, which sounds like a valid value. It's not static but changes up and down by a decibel or two. It would be interesting to see how the radio dynamically determines the noise level.
  • channel_time: If you don't change channels, this goes up by 1000 ms each 1000 ms of wall-clock time.
  • channel_time_busy: Probably the amount of time the energy detector showed input power over the noise threshold, plus the time the radio was in transmit mode. This seems to be about the same as the sum of RX and TX times, unless there is a lot of external interference (like another busy Wi-Fi network on the same channel).
  • channel_time_ext_busy: This might be a counter for the secondary radio channel used in channel bonding, like in 802.11ac. I haven't tested this since channel bonding isn't working on my router.
  • channel_time_rx: On my router this increments at around 10% of the channel_time rate, even if the network is completely idle, so I'm guessing it triggers at some very low level, before packet CRC checks and things like that. As expected, it goes towards 100% of the channel_time rate if you send a lot of data towards the interface.
  • channel_time_tx: At idle it's around 2% on my router, which seems consistent with beacon broadcasts. It goes towards 100% of channel_time rate if you send a lot of data from the interface.
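The per-state percentages in the Munin graphs come from differentiating these counters between polls. A rough sketch of that computation, parsing the iw output shown earlier (the exact output layout may vary between iw versions):

```python
import re

FIELDS = ("active", "busy", "receive", "transmit")

def parse_survey(text):
    """Pull the time counters (in ms) for the in-use channel out of
    `iw dev wlan0 survey dump` output."""
    counters = {}
    in_use = False
    for line in text.splitlines():
        if "frequency:" in line:
            in_use = "[in use]" in line
        elif in_use:
            m = re.search(r"channel (\w+) time:\s*(\d+) ms", line)
            if m and m.group(1) in FIELDS:
                counters[m.group(1)] = int(m.group(2))
    return counters

def duty_cycles(prev, curr):
    """Fraction of wall-clock time spent in each state between two
    survey snapshots (what the Munin graphs effectively show)."""
    dt = curr["active"] - prev["active"]
    return {k: (curr[k] - prev[k]) / dt for k in ("busy", "receive", "transmit")}
```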

In general, the following relation seems to hold:

\frac{t_{channel-tx} + t_{channel-rx}}{t_{channel}} < \frac{t_{channel-busy}}{t_{channel}} < 100\%
Posted by Tomaž | Categories: Life | Comments »

Button, button. Who's got the button?

19.01.2016 19:07

As much as I try, I can't switch away from GNOME. Every once in a while I try some other desktop environment in Debian, and I invariably switch back after a month. The fact is that, for me, GNOME seems to come with the fewest little annoyances that need manual fixing or digging through configuration files or source code. Or it might be that I'm just too used to all the small details of how GNOME does things on the user interface side. It's also possible that I understand enough about GNOME internals at this point that when things stop working I usually find the solution pretty quickly.

That does not mean, however, that GNOME no longer manages to surprise me. This is a story about the latest such occurrence. It's a story about a button. This button in particular:

Rotation lock button in GNOME 3.14

I noticed this button after upgrading my work laptop to Debian Jessie that comes with GNOME 3.14. The most curious thing about it was that it does not appear on my home desktop computer with a supposedly identical Debian Jessie setup. What makes my laptop so special that it sprouted an extra button in this prime piece of screen real-estate?

Being the kind of person that first looks into the documentation, I opened the GNOME 3.14 Release Notes. However, there is no note in there about any mysterious new buttons in the click-top-right-corner menu. I was not surprised though, since the button might have been added in any of the several GNOME versions released between the previous Debian Stable and this one.

The button is marked with a circular arrow icon. The kind of icon that is often used to distinguish a reboot in contrast to a shutdown when positioned near the power-switch icon. Like for instance on this Ubuntu shutdown dialog:

Screenshot of Ubuntu shutdown dialog.

I don't think it was unreasonable then to assume that this is a reboot button. Anyway, when documentation about a thing fails you, the best next step is usually to poke the thing with a stick. Preferably a long and non-conductive one.

Unfortunately, this approach failed to uncover the purpose of the mysterious button. Upon closer inspection, the button did change a bit after clicking on it, but seemingly nothing else happened. On second thought, it would be kind of weird for GNOME to have a reboot button when at one point it lacked even a shutdown option, for a reason nobody quite understood.

In a final attempt to discover the purpose of the catching-its-own-tail arrow, I hovered the mouse cursor over it. Of course, no helpful yellow strip of text showed up. Tooltips are obsolete now that everything is a smartphone.

At that point I ran out of ideas. Since I had no text associated with the button, it seemed impossible to turn to a web search for help. In fact, I realized I didn't even know what the menu that holds the button is called these days. I complained about it on IRC and someone suggested doing a Google Image search for the button. That first sounded like a brilliant idea, but it soon turned out that visual search for graphical user interface widgets just isn't there yet:

Results of a search for visually similar images.

Of course, I was stubborn enough to keep searching. In the end, a (text) query led me to a thread on StackExchange that solved the conundrum. I don't remember what I typed into Google when I first saw that result, but once I knew what to search for, the web suddenly turned out to be full of confused GNOME users. Well, at least I was not alone in being baffled by this.

In the end, I only wonder how many of the people who went through this same experience also immediately started swinging their laptops around to see whether the screen would actually rotate, and were disappointed when, regardless of the rotation lock button, the desktop remained stubbornly in landscape mode.

Posted by Tomaž | Categories: Life | Comments »

End of a project

15.11.2015 11:38

Two weeks ago, the CREW project concluded with a public workshop and a closed meeting with reviewers invited by the European Commission. Thus ended five years of work on testbeds for wireless communications by an international team from Belgium, Ireland, Germany, France and Slovenia. I've been working on the project for nearly four of these years as the technical lead for the Jožef Stefan Institute. While I've been involved in several other projects, the work related to CREW and the testbed clusters we built in Slovenia has occupied most of my time at the Institute.

Spectrum Wars demo at the Wireless Community meeting.

Image by Ingrid Moerman

It's been four years of periodical conference calls, plenary and review meetings, joint experiments at the testbeds, giving talks and demonstrations at scientific conferences, writing deliverables for the Commission, internal reports and articles for publication, chasing deadlines and, of course, the actual work: writing software, developing electronics, debugging both, making measurements in the field and analyzing data, climbing up lamp posts and figuring out municipal power grids from outdated wiring plans.

It's funny how, when I think back, writing is the first thing that comes to mind. I estimate I've written about 20 thousand words of documents per year, which does not seem all that much - it's less than the amount of text I publish yearly on this blog. Counting everyone involved, we produced almost 100 peer-reviewed papers related to the project, which does seem a lot. It has also given me my first experience of working with a book publisher, and a best paper award from the Digital Avionics Systems Conference that now hangs above my desk.

Measuring radio interference inside an Airbus cabin mockup.

Image by Christoph Heller

Remote writing collaboration has definitely been something new for me. It is surprising that what works best in the end are Word documents in emails and lots of manual editing and merging. A web-based document management system helped with keeping inboxes within IMAP quotas, but little else. Some of this is certainly due to technical shortcomings in various tools, but the biggest reason I believe is simply the fact that Word plus email is the lowest common denominator that can be expected of a large group of people of varying professions and organizations with various internal IT policies.

Teleconferencing these days is survivable, but barely so. I suspect Alexander Graham Bell got better audio quality than some of the GoToMeetings I attended. Which is where face-to-face meetings come into the picture. I've been on so many trips during these four years that I've finally come to the point where flying became a necessary evil instead of a rare opportunity to be in an airplane. Most of the meetings have been pleasant, if somewhat exhausting, 3-day experiences. It is always nice to meet in person the people you're exchanging emails with and see what their institutions look like. Highlights for me were definitely experiments and tours of other facilities. The most impressive that come to mind were the Imec RF labs and clean rooms in Leuven, the w-iLab.t Zwijnaarde testbed in Ghent and the Airbus headquarters near Munich.

Instrumented Roombas at w-iLab.t

(Click to watch Instrumented Roombas at w-iLab.t video)

One of the closing comments at the review was about identifying the main achievement of the project. The most publicly visible results of the work in Slovenia are definitely the little boxes with antennas you can see above the streets in Logatec and around our campus. I have somewhat mixed feelings about them. On one hand, it's been a big learning experience: planning and designing a network and, most of all, seeing how electronics fails when you leave it hanging outside all year round; designing a programming interface that would be simple enough to learn and powerful enough to be useful. If I were to do it again, I would certainly do a lot of things differently, software- and hardware-wise.

Different kinds of sensor nodes mounted on street lights.

While a number of experiments were done on the testbed, practically all required a lot of hands-on support. We are still far from having a fully automated remote testbed as a service that would be useful to academics and industry alike. Another thing that was very much underestimated was the amount of continuous effort needed to maintain such a testbed in operation. It requires much more than one full-time person to keep something as complex as this up and running. The percentage of working nodes in the network was often not something I could be proud of.

For me personally, the biggest takeaway from the project has been the opportunity to study practical radio technology in depth - something I didn't get to do much during my undergraduate study. I've had the chance to work with fancy (and expensive) equipment I would not have had otherwise. I studied the Texas Instruments CC series of integrated transceivers in detail, which resulted in some interesting hacks. I've designed, built and tested three generations of custom VHF/UHF receivers - the most ambitious electronic designs I've made so far. Their capabilities compared to other hardware are encouraging, and right now it seems I will continue working on them next year, either as part of my post-graduate studies or under other projects.

SNE-ESHTER analog radio front-end.

I have heard it said several times that CREW was one of the best performing European projects in this field. I can't really give a fair comparison since this is the only such project I've been deeply involved in so far. I was disappointed when I heard of servers being shut down and files deleted as the project wound down, but that is apparently how things usually go when focus shifts to other sources of funding. It is one thing to satisfy your own curiosity and desire to learn, but seeing your work being genuinely useful is even better. I learned a lot and got some references, but I was expecting the end result as a whole to be something more tangible, something that would be more directly usable outside of the immediate circle involved in the project.

Posted by Tomaž | Categories: Life | Comments »

TEMPerHUM USB hygrometers

11.10.2015 14:19

Last month at a project meeting in Ghent, Hans Cappelle told me about PCsensor TEMPerHUM hygrometers and thermometers. At around 23 EUR per piece, these are relatively cheap, Chinese-made USB-connected sensors. I was looking for something like that to put into a few machines I have running at different places. Knowing the room temperature can be handy. For instance, I once noticed that the CPU and motherboard temperatures on one server were rising even though the fan RPMs did not change. Was it because of a change in the room temperature, or was it a sign that the heat sinks were getting clogged with dust? A reading from a sensor outside the server's box would have answered that without having to go and have a look.

TEMPerHUM v1.2 USB hygrometers

When I took them out of the box, I was a bit surprised at their weight. I assumed the shiny surface was metallized plastic, but it looks like the case is actually made of metal. Each sensor came with a short USB extension cable, so you can position it a bit away from the warm computer it's plugged into.

My devices identify as RDing TEMPERHUM1V1.2 on the USB bus (vendor ID 0x0c45, product ID 0x7402). They register two interfaces with the host: a HID keyboard and a HID mouse. The blue circle with TXT in it is actually a button. When you press it, the sensor starts simulating someone manually typing in periodic sensor readouts. As far as I understand, you can set some settings in this mode by playing with Caps lock and Num lock LEDs. I haven't looked into it further though since I don't currently care about this feature, but it is good to know that sensors can be used with only a basic USB HID keyboard driver and a text editor.

Text editor with TEMPerHUM readings typed in.

On Linux, udev creates two hidraw devices for the sensor. To get a sensor readout from those without having to press the button, Frode Austvik wrote a nice little utility. For v1.2 sensors, you need this patched version.

The utility is written in C and is easy to compile for more exotic devices. The only real dependency is HID API, which is only packaged for Debian Jessie, but the 0.8.0 source package compiles on Wheezy as well without any modifications. OpenWRT has a package for HID API. The sensor, however, does not work on my TP-Link router, because the sensor uses USB v1.10 and the router's USB ports apparently only support USB v2.x (see OpenWRT ticket #15194).

By default, udev creates hidraw device files that only root can read. To enable any member of the temperhum group to read the sensor, I use the following in /etc/udev/rules.d/20-temperhum.rules:

Update: Previously, I made the hidraw device world-writable. That might be a security problem, although probably a very unlikely one.

SUBSYSTEM=="hidraw", ATTRS{idVendor}=="0c45", ATTRS{idProduct}=="7402", MODE="0660", GROUP="temperhum"
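Once the permissions are sorted out, reading the sensor boils down to writing a query to the hidraw device and decoding the 8-byte reply, which is roughly what Frode's utility does. The sketch below is an illustration only: the command bytes, field offsets and /100 scaling follow common TEMPer-family tools and are assumptions that may not match every firmware revision.

```python
import struct

# Readout command used by common TEMPer-family tools; whether it
# matches this exact firmware revision is an assumption.
QUERY = b"\x01\x80\x33\x01\x00\x00\x00\x00"

def decode(buf):
    """Decode an 8-byte response into (temperature_C, humidity_%RH).
    Big-endian 16-bit fields at offsets 2 and 4, scaled by 1/100 --
    an assumption borrowed from existing TEMPer tools."""
    t_raw, h_raw = struct.unpack_from(">hh", buf, 2)
    return t_raw / 100.0, h_raw / 100.0

def read_sensor(path="/dev/hidraw1"):
    """Query the sensor through its raw HID device (the path is an
    example; check which hidraw node udev actually created)."""
    with open(path, "r+b", buffering=0) as dev:
        dev.write(QUERY)
        return decode(dev.read(8))
```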

Finally, what about the accuracy? I left one of the sensors (connected via an extension cable) to sit overnight in a room next to an Arduino with a DS18B20 and the old wireless weather monitor I use at home. The two spikes you see on the graphs are from when I opened the windows in the morning.

Temperature versus time for three thermometers.

The DS18B20 has a declared accuracy of ±0.5°C, while the Si7021 sensor that TEMPerHUM supposedly uses should be accurate to ±0.4°C. However, TEMPerHUM shows a reading between 1.5°C and 2.0°C higher than the DS18B20. This might be because the USB fob heats itself slightly, or it might be some numerical error (or they might be using off-spec chips). Other people have reported that similar sensors tend to overestimate the temperature.

Humidity versus time for two hygrometers.

For relative humidity measurements I don't have a good reference. The difference between my weather monitor and TEMPerHUM is around 10% according to this graph. Si7021 should be accurate to within 3% according to its datasheet. My weather monitor is certainly worse than that. So I can't really say anything about the humidity measurement except that it isn't obviously wrong.

In conclusion, except for the incompatibility with some Atheros SoCs I mentioned above, these sensors seem to be exactly what they say they are. Not super accurate but probably good enough for some basic monitoring of the environment.

Posted by Tomaž | Categories: Life | Comments »

Spectrum Wars

25.07.2015 14:42

SpectrumWars will be a programming game where players compete for bandwidth on a limited piece of radio spectrum. It's also apparently a title that tends to draw attention from bored conference attendees and European commissioners alike.

SpectrumWars XKCD spoof.

with apologies to Randall Munroe

With the CREW project nearing completion, it was decided that one of the last things created in the project would be a game. This might seem an odd conclusion to five years of work by 8 European institutions that previously included little of what most would consider entertainment. We built testing environments for experimentation with modern wireless communications, aimed at the exclusive club of people who know what CSMA/CA and PCFICH stand for. In the end, I'm not sure we managed to convince them that what we did was useful to their efforts, much less, I guess, the broader public and potential students who might like to study this field.

What better way to correct this than to present the need for smart radio testbeds in the form of a fun game? The end of a project is also the time when you want to show off your work in a way that reaches the broadest possible audience. Everything to raise the possibility that the powers-that-be grant you funding for another project.

In any case, SpectrumWars is a fun little thing to work on and I gladly picked it up in the general lethargy that typically suffuses European projects on their home stretch. The idea stems from a combination of the venerable RobotWar game concept, DARPA Spectrum Challenge and some previous demonstrations by the CREW project at various conferences. Instead of writing a program for a robot that fights off other robots, players program radios. Turning the turret becomes tuning the radio to a desired frequency. A spectrum analyzer replaces the radar and the battle arena is a few MHz of vacant spectrum. Whoever manages to be best at transferring data from their transmitter to their receiver in the presence of competition, wins.

A desk-top setup for SpectrumWars with VESNA sensor nodes.

In contrast to the dueling robots, the radios are not simulated: their transmissions happen in the real world and might share the spectrum with Wi-Fi and the broken microwave next door. To keep it fun, things are simplified, of course. The game abstracts the control of the radios to the bare minimum so that a basic player fits into 10 lines of code, and a pretty sophisticated one into 50. This is not the DARPA Challenge where you need a whole team to have a go (unfortunately, there's no five-figure prize fund either).

So far I've made a hardware-agnostic game controller implementation in Python that takes care of all the dirty little details of running the game and defines the player's interface. The radios are VESNA nodes with Texas Instruments CC2500 transceivers and special firmware, but other CREW partners are currently working on supporting their own hardware. A USRP is used as a spectrum sensor that is shared by all users. While a lot of things are still missing (a user-friendly front-end in particular), it was enough to confirm so far that playing with the algorithms in this way is indeed fun (well, at least for my limited idea of fun).
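To give a feel for the scale of a "10-line player", here is a sketch of a deliberately dumb one. The radio object here is a made-up stub: the real method names and semantics are defined by the game controller's player API, not by this example.

```python
import random

class StubRadio:
    """Stand-in for the game's radio object. The real interface is
    defined by the game controller; these method names are invented
    for illustration."""
    def __init__(self, nchannels=64):
        self.nchannels = nchannels
        self.channel = 0

    def set_channel(self, channel):
        self.channel = channel

    def send(self, data):
        return True  # a real radio would report whether the packet got through

class RandomHopper:
    """Hop to a random channel every turn and transmit. A smarter
    player would consult the shared spectrum sensor first."""
    def __init__(self, radio):
        self.radio = radio

    def step(self):
        self.radio.set_channel(random.randrange(self.radio.nchannels))
        return self.radio.send(b"payload")
```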

I gave some more details in a talk at the Institute last Friday (slides), or you can check the draft documentation I wrote or the source, if you feel particularly adventurous.

Title slide from the SpectrumWars talk.

In the end, SpectrumWars is supposed to become a web site where tournaments between algorithms run continuously, where one can submit an entry and where a kind of scoreboard is kept. I'm not optimistic that it will in fact be finished to the point of being presentable to the general public in the couple of months left in the project. However, I think it would be a shame if it went directly into the bit bucket together with many other things when the project funding runs out.

While the future might be uncertain at the moment, I do have a tentative plan to maybe set up an instance of SpectrumWars at the CCC congress in December. The inherently crowded 2.4 GHz spectrum there would certainly present an interesting challenge for the players and the congress does have a history of unusual programming games.

Posted by Tomaž | Categories: Life | Comments »

Weather radar image artifacts

16.06.2015 21:49

The Slovenian environment agency (ARSO) operates a 5 GHz Doppler weather radar on a hill in the eastern part of the country (Lisca). Every 10 minutes they publish a map with an estimated rate of rainfall based on radar returns.

Gašper of the opendata.si group has been archiving these rainfall maps for several years now as part of the Slovenian weather API project. A few days ago I was playing with the part of this archive that I happened to have on my disk, and I noticed some interesting properties of the radar image that show up when you look at statistics generated from many individual images.

This is a sum of approximately 17676 individual rainfall maps, taken between December 2012 and April 2013:

Sum of 17676 weather radar images from ARSO.

Technically what this image shows is how often each pixel indicated non-zero rainfall during this span of time. In other words, the rainiest parts, shown in red, had rain around 18% of the time according to the radar. I ignored the information about the rate of rainfall.
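Computing the statistic itself is simple once the maps are decoded into arrays. A sketch, assuming 2-D arrays where zero means no detected rainfall (the published PNGs need their colour key decoded into rainfall values first):

```python
import numpy as np

def rain_frequency(images):
    """Per-pixel fraction of frames that show non-zero rainfall."""
    stack = np.stack(list(images))
    return (stack != 0).mean(axis=0)
```

For the 17676 maps above, a red pixel in the resulting image corresponds to a per-pixel fraction of around 0.18.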

More interesting is how some artifacts become visible:

  • There is a blind spot around the actual location of the radar. The radar doesn't scan above a certain elevation angle and is blind for weather directly above it.

  • There are triangular shadows centered on the radar location. These are probably due to physical obstructions of the radar beam by terrain. I've also heard that some of these radars automatically decrease sensitivity in directions where interference is detected (weather radars share the 5 GHz band with wireless LAN). So it is also possible that those directions have higher noise levels.

  • Concentric bands are clearly visible. The image is probably composed of several 360° sweeps at different antenna elevation angles, although it is curious that their spacing decreases with distance. They might also be the result of some processing algorithm that adjusts for the distance in discrete steps.

  • It's obvious that sensitivity falls with distance. Some parts of the map in the north-western corner never show any rain.

  • Some moiré-like patterns are visible in the darker parts of the image. They are probably the result of interpolation. The map has rectangular pixels aligned with latitude and longitude while the radar produces an image in polar coordinates.

I should mention that ARSO's weather radar system was upgraded in 2014 with a second radar. Newer rainfall maps are calculated from the combined data of both radars and likely have far fewer artifacts. Unfortunately, the format of the images published on the web changed slightly several times during that year, which makes them a bit less convenient to process and compare.

Posted by Tomaž | Categories: Life | Comments »