python rftest.py

24.10.2012 20:25

Recently I've been working on an automatic test harness for the spectrum sensing hardware I've designed at the Jožef Stefan Institute. It takes the form of a Python script that talks to a vector signal generator on one end and a VESNA on the other. It tests the receiver under different conditions and after a few minutes or so (depending on which tests have been selected) writes a nice report that includes characteristics like receiver noise figure, power level detector offset and linearity errors, local oscillator accuracy and so on, plus all the raw data in case it's needed to double-check the calculations.

Looking back, this should have been the first thing to make after the prototype circuit boards landed on my desk. It would have spared quite a few hours of dull button pressing and note taking. Unfortunately, even this academic environment is not immune to pressing deadlines, and sometimes it just makes more sense to go the slow-and-certain way of doing things manually than to opt for a piece of software development that will almost certainly save hours of work but might also explode into a week-long debugging session.

Automated radio receiver measurement setup.

In the end however, the development went quite smoothly. Controlling our stack of Rohde&Schwarz instruments turned out to be surprisingly easy. Using the USB Test & Measurement Class under Linux is no more complicated than reading and writing strings to a /dev/usbtmc3 character device (my Debian Squeeze system already included all the necessary kernel modules). The high-level interface is the same as the one used on the venerable GPIB and consists of plain human-readable ASCII statements that control various aspects of the instrument or return measurement results. On our equipment the same interface is also exported as a telnet-like service on an Ethernet port, but I opted for USB since my laptop only has one wired network interface.
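To give an idea of just how simple this is, here is a minimal sketch of how a Python script can talk to a USBTMC instrument. The device path and the frequency and power commands are examples only (the exact command set depends on the instrument); *IDN? is the standard identification query.

import os

DEV = "/dev/usbtmc3"   # character device node assigned to the instrument

def send(fd, cmd):
    """Write one ASCII command, terminated with a newline."""
    os.write(fd, (cmd + "\n").encode("ascii"))

def query(fd, cmd):
    """Write a command and read back the instrument's reply."""
    send(fd, cmd)
    return os.read(fd, 4096).decode("ascii").strip()

fd = os.open(DEV, os.O_RDWR)
print(query(fd, "*IDN?"))     # standard identification query
send(fd, "FREQ 500 MHz")      # example commands only; the exact
send(fd, "POW -60 dBm")       # syntax depends on the instrument
os.close(fd)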

With this setup it's now finally feasible to exhaustively test a pile of receivers. This will also give me some statistical data to see how various characteristics differ from one device to the other without the need to depend solely on data provided by component manufacturers. Definitely a good thing to have on hand when other researchers come knocking on the door asking why their theoretical calculations don't fit the measurements.

RSS indicator step response

Above is one interesting result of these measurements: since these receivers are used for spectrum sensing, it's important to know how the power level detectors behave in different conditions. This is the result of a step-response test of the TV-band UHF receiver based on the NXP TDA18219HN tuner. This tuner has several stages of automatic gain control and the power level detector takes their individual gains into account when calculating the power at the antenna interface. While the turn-on response is nice and fast, it takes quite some time for the tuner to settle back and provide an accurate measurement when the signal source is turned off.

Assorted collection of automated radio receiver test results.

This is a collection of test results for a narrow-band sub-1 GHz tuner based on the Texas Instruments CC1101 chip. The VESNA spectrum-sensor application supports quite a few different configurations using this hardware. While the measurements above show a properly working receiver, the automated tests have already found several software bugs in the receiver configuration and one out-of-spec radio board.

As usual, the software used to make these graphs is available on-line under a GPL version 3 license in the SensorLab repository on GitHub.

Posted by Tomaž | Categories: Analog | Comments »

Bletchley Park

16.10.2012 17:43

I'm spending a few days in London, and this Sunday Jure and I visited Bletchley Park, the famous wartime home of the British code breakers who cracked the German Enigma codes. It has recently been converted into a museum and is less than an hour's train ride north of London. I've heard some praise for the place and thought it might be worthwhile to check it out. We were not disappointed.

Entrance to Bletchley Park

The place is quite large, quite a bit larger than I expected. Not all of the former military buildings have been renovated though. Those that have now house a collection about the happenings at Bletchley Park during the second world war, as well as The National Museum of Computing. A few other organizations also have a place there (like the National Radio Centre), but we ran out of time to check them out.

Sorry my decor is tired but a little more funding is required

The Bletchley Park collection includes probably just about every model of the Enigma machine ever made in Germany, plus a few that were modified during the decoding effort (one, for instance, has mechanical counters installed for statistical analysis). There's an impressive collection of other cyphering machines from all over the world as well, although I would have liked a more thorough description of how they operate.

Lorenz cypher machine

All of this fades, though, in comparison with the centerpiece of the collection: the replicas of the code breaking machines that were originally designed and constructed in this place, then destroyed after the war to conceal from the rest of the world the progress British cryptographers had made. The replicas represent an incredible effort by retired engineers who dedicated years of their lives to painstakingly rebuilding the old machines, often reinventing parts and procedures for which no documentation could be found.

Back side of the Bombe machine replica

The first replica, the Bombe, is an electromechanical device that basically performed an exhaustive search for an Enigma key given a known plain text. It's a wonderful missing link between computers and steam engines. While the long strands of wires on one side would fit perfectly well in the back of a Cray, the mechanical insides are built with the best practices of steam engine technology: full of sprockets and camshafts threaded with thin copper tubes that supply lubricating oil, slowly dripping into pans below.

Inside of the Bombe machine replica

There is also a nice tribute to Alan Turing, his life and work, including the official apology from the British government for his wrongful prosecution that led to his suicide.

Statue of Alan Turing in Bletchley Park

The other replicated machine, Colossus, the first electronic computer, was an even larger effort and supposedly involved a public call for old vacuum-tube equipment that could be harvested for usable electronic components. It stands in the same building that housed the original and has been tested by decoding a message encrypted with one of the preserved Lorenz cypher machines.

Colossus machine replica

Colossus is also where the Bletchley Park collection meets the computing museum's. That one focuses on the history of computing in Britain, going from the WITCH computer (the first time I'd heard of a Dekatron tube) to the modern age, and is much too large to cover in any detail here. It includes everything from big mainframes to home microcomputers, with the BBC Micro having a prominent place. As with the Bletchley Park collection, it's staffed by engineers who will gladly enter into a lengthy debate about this or that tiny detail of the machines they lovingly care for.

HP 250

So in conclusion, Bletchley Park was definitely worth a visit and I wouldn't mind visiting it again on occasion, as one day just wasn't enough to go into any detail when examining all the exhibits.

Posted by Tomaž | Categories: Life | Comments »

Measuring capacitors

11.10.2012 20:18

Sometimes little, trivial things keep bothering me. For instance, that mystery regarding the switching power supply for the OLED display. Tests have shown the unexpected drop in supply voltage doesn't affect the quality of the displayed image, so it's a non-issue as far as I can see. But I guess it's a matter of engineering pride to find out what exactly has been going on. Recall that I blamed a 4.7 µF chip ceramic capacitor from Murata for having less capacitance than it should. Well, I was wrong.

For the production run of the Arduino OLED shield I ordered another roll of capacitors, this time from Multicomp. The power supply circuit using them, however, behaves exactly like my prototype. Getting two bad shipments in a row is next to impossible, so my brilliant deductions in that previous blog post must have gone seriously wrong somewhere.

To answer the capacitance question I rigged together a simple 555 timer circuit and measured capacitors from both batches using two methods: first by charging them from a current source and calculating the capacitance from the time derivative of the voltage, and second by measuring the time constant of an RC relaxation circuit.
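The arithmetic behind both methods is trivial. Here it is as a short sketch; the current, resistance and scope readings below are made-up placeholders, not the actual values from my setup:

from math import log

# Method 1: charge the capacitor with a known constant current and read
# the slope of the voltage ramp off the oscilloscope: C = I / (dU/dt)
I = 1.0e-3          # charging current [A] (placeholder value)
slope = 285.0       # measured dU/dt [V/s] (placeholder value)
print("dU/dt method: %.1f uF" % (I / slope * 1e6))

# Method 2: charge the capacitor through a known resistor and measure the
# time it takes to go from 1/3 Vcc to 2/3 Vcc (the 555 threshold levels);
# over that interval t = R * C * ln(2), so C = t / (R * ln(2))
R = 10e3            # series resistance [ohm] (placeholder value)
t = 35e-3           # measured charging time [s] (placeholder value)
print("RC method:    %.1f uF" % (t / (R * log(2)) * 1e6))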

Capacitor measurement, dU/dt method

Murata capacitors yielded 3.5 µF using the dU/dt method, Multicomp capacitors yielded 3.3 µF.

Capacitor measurement, RC method

Murata capacitors yielded 5.1 µF using the RC method, Multicomp capacitors yielded 3.9 µF.

Now, these home-grown measurements are nowhere near exact of course, but some back-of-the-envelope estimates show that the declared value (4.7 µF with up to -20% tolerance) can most certainly lie within their error margins, while my initial estimate of 1.4 µF can't.

So, if the capacitors can't be blamed for the inconsistency, what can be? One of the wrong assumptions I made was that the switching power supply can provide at most 60 mA. That's certainly wrong, as a simple experiment showed that into a short circuit it can source close to half an ampere. Investigating further, it turned out that I hadn't properly accounted for current regulation delays in the control IC and core saturation when the switcher operates under such extreme conditions.

In conclusion, my opinion now is that the initial measurements were correct and that during the pre-charge cycle the display indeed sinks a considerable amount of current. The fact that the results of the add-a-known-capacitance trick fit so well with the theory at the time must have been just a coincidence.

Posted by Tomaž | Categories: Analog | Comments »

Capture 0.0.5

05.10.2012 18:21

There's a new version of the capture code available from the am433 distribution. This version allows you to disable modulation auto-detection using the -M command-line argument and has more robust detection of plain binary encoded packets.

In case you don't know what I'm talking about: the capture tool is part of my am433 project. It takes a low-frequency digital signal (say, with a 48000 Hz sampling rate or thereabouts), chops it into individual packets and applies some heuristics to determine each packet's modulation type. You can think of it as a box that takes a continuous sample stream on one end and returns decoded packets plus some metadata on the other. Optionally you can plug it into libpcap and your favorite packet dissector.
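The packet-chopping part is conceptually simple. The sketch below is not taken from capture itself, just a toy illustration of the basic idea: treat anything above a noise threshold as part of a burst and end the burst once the signal stays quiet for long enough.

def chop_packets(samples, threshold=0.1, max_gap=480):
    """Yield (start, end) sample indices of individual bursts.

    A burst ends once the signal stays below `threshold` for more than
    `max_gap` samples (10 ms at a 48000 Hz sampling rate)."""
    start = None
    quiet = 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i          # first sample above the noise floor
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet > max_gap:
                yield (start, i - quiet + 1)
                start = None
                quiet = 0
    if start is not None:
        yield (start, len(samples))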

Screenshot of capture and Audacity

It was originally meant to decode the digitized baseband data from my simple 433 MHz AM receiver. However, it turned out quite useful for other applications as well. For example, it can be used with GNU Radio to decode transmissions on various frequency bands and modulations, as long as we are talking about devices that communicate infrequently with small packets (EnergyCount 3000 is one such case). With proper receiver hardware it can even be used to decode infrared transmissions.

It can get data from an ALSA device or a file. You can also point it at a named pipe, which is currently the easiest way of using it from GNU Radio. It would be great, though, if someone wrapped it in a proper GNU Radio block.

Posted by Tomaž | Categories: Code | Comments »

About Digi Connect ME

03.10.2012 23:32

Remember my recent story about the Atmel modules? For the last two days I've had a very strong feeling of déjà vu while trying to debug an issue that popped up at the last minute and is preventing a somewhat time-critical experiment from being performed on one of our VESNA testbeds.

It turned out that while 7-bit ASCII strings were correctly transferred between VESNAs and a client calling an HTTP API, binary data sent down the same pipeline got corrupted somewhere along the way. Plus, to make things just a bit more interesting, it sometimes also made the whole network of VESNAs unreachable.

VESNA coordinator with Digi Connect ME module.

Now, problems like this are unfortunately quite a common occurrence. Before data from a sensor node reaches a client, it must pass through multiple hops in a (notoriously unreliable) ZigBee mesh network, a coordinator VESNA that serves as a proxy between the mesh and a TCP/IP tunnel, a Java application at the other end of the tunnel that translates between the home-grown HTTP-like protocol used between VESNAs and proper HTTP, and finally an Apache server acting as a reverse proxy for the public HTTP API. Leaky abstractions and encapsulations are aplenty, and it's not unusual for some code somewhere along the line to assume that the data it is passing is ASCII-only, valid UTF-8 or something else entirely.

I won't bore you with too many details about the debugging. I opted for a top-down approach, with the Java layer being the main suspect based on previous experience. After that got a thorough check, I moved on to sniffing the tunnel with Wireshark, with the extra complication that the tunnel is SSL encrypted. Luckily, it doesn't seem to use Diffie–Hellman key exchange, so the SSL dissector was quite effective once I figured out how to extract the private keys from Java's keystore. That layer also looked OK, so the next one down was the SSL tunnel endpoint, which is a Digi Connect ME module.

This is basically a black box that takes a duplex RS-232 connection on one end and tunnels it through an encrypted TCP/IP connection. That's a deceptively simple description though. In fact the Digi Connect ME is a miniature computer with an integrated web server for configuration, scripting support and a ton of other features (I have close to 500 pages of documentation for it on my computer, and I only downloaded the documents I thought might be useful in debugging this issue).

VESNA coordinator hooked up to a logic analyzer.

Anyway, when I looked closely, the problem was quite apparent. On the RS-232 side the module was set to use XON/XOFF software flow control. This obviously won't work when sending arbitrary binary data. Not only will the module treat XON and XOFF bytes as special and drop them from the pipeline, an XOFF that is not followed by an XON will also halt the transmission, leading to hangs and timeouts. The fix looked simple enough: switch to hardware flow control using the dedicated CTS and RTS lines.
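The failure mode is easy to demonstrate. The toy simulation below has nothing to do with the Digi firmware itself; it just shows what any link that interprets XON (0x11) and XOFF (0x13) in-band will do to arbitrary binary data: the flow-control bytes are stripped from the payload, and a stray XOFF stops the flow altogether.

XON, XOFF = 0x11, 0x13

def xonxoff_link(data):
    """Simulate a link that interprets XON/XOFF in-band."""
    out = bytearray()
    paused = False
    for b in data:
        if b == XOFF:
            paused = True        # transmission halts until an XON arrives
        elif b == XON:
            paused = False
        elif not paused:
            out.append(b)        # flow-control bytes never reach the output
    return bytes(out)

payload = bytes(range(256))      # arbitrary binary data
received = xonxoff_link(payload)
print(len(payload), "->", len(received))   # prints "256 -> 18": everything
                                            # after the stray XOFF is lost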

As you might have guessed, it was not that easy. It turns out that when hardware flow control is enabled on the Digi Connect ME, it will randomly duplicate characters when sending them down the serial line. Again, the suspicion first fell on our homegrown UART driver on the VESNA side, but the logic analyzer trace below confirmed that the Digi Connect module itself is the culprit and not some bug on our side.

Logic analyzer trace from a Digi Connect ME module.

Now, at this point I'm seriously confused. The Digi Connect ME is quite popular and, at least judging from browsing the web, used in a lot of applications. But after the ZigBit module it's also the second piece of hardware that exhibits such broken behavior that I can't believe it has passed any thorough quality check. All of my experience says it must be us who are doing something wrong, and in both cases I have tested just about every possibility where VESNA could be at fault. Actually, I want it to be a mistake on our end, because that would mean I can fix it. But honestly, once you see wrong data being sent on a logic analyzer, I don't think there can be any more doubt. Must really everything turn out rotten once you look at it in enough detail?

Posted by Tomaž | Categories: Digital | Comments »