## Unreasonable effectiveness of JPEG

28.05.2013 20:41

A colleague at the Institute is working on compressive sensing and its applications in sensing the radio frequency spectrum. Since whatever he comes up with is ultimately meant to be implemented on VESNA and the radio hardware I'm working on, I did a bit of reading on this topic as well. And as occasionally happens, it led me to a completely different line of thought.

The basic idea of compressive sensing is similar to lossy compression of images. You make use of the fact that images (and many other real-world signals) have a very sparse representation in some domains. For images in particular, the best-known ones are the discrete cosine and wavelet transforms. For instance, you can discard all but a few percent of the largest coefficients and a photograph will be unnoticeably different from the original. That fact is exploited with great success in common image formats like JPEG.
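To make this concrete, here is a small numerical sketch (my own illustration, not anything from the literature) comparing a smooth, photograph-like signal with pure noise under the DCT, the transform at the heart of JPEG. The DCT-II is implemented naively with NumPy, so it's only suitable for tiny inputs:

```python
import numpy as np

def dct2(x):
    """Naive O(N^2) DCT-II of a 1-D signal."""
    N = len(x)
    n = np.arange(N)
    return np.array([(x * np.cos(np.pi * (n + 0.5) * k / N)).sum()
                     for k in range(N)])

def energy_in_top(x, frac=0.1):
    """Fraction of total energy captured by the largest `frac` of DCT coefficients."""
    e = np.sort(dct2(x) ** 2)[::-1]
    k = max(1, int(len(e) * frac))
    return e[:k].sum() / e.sum()

N = 256
t = np.linspace(0.0, 1.0, N)
smooth = np.exp(-((t - 0.5) ** 2) / 0.02)            # a smooth, "natural" signal
noise = np.random.default_rng(0).standard_normal(N)  # printed random noise

print(energy_in_top(smooth))  # close to 1.0: a handful of coefficients suffice
print(energy_in_top(noise))   # far lower: noise has no sparse DCT representation
```

The smooth signal compresses because its energy collapses onto a few low-frequency coefficients, while the noise spreads its energy roughly evenly across all of them.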

Image by Michael Gäbler CC BY 3.0

What caught my attention though is that even after some searching around, I couldn't find any good explanation of why common images have this nice property. Every text on the topic I saw simply seemed to state that if you transform an image so-and-so, you will find out that a lot of coefficients are near zero and you can ignore them.

As Dušan says, images that make sense to humans only form a very small subset of all possible images. But it's not clear to me why this subset should be so special in this regard.

This discussion is about photographs, which are ultimately two-dimensional projections of light reflected from objects around the camera. Is there some succinct physical explanation of why such projections should, in the common case, have sparse representations in the frequency domain? It must somehow follow from macroscopic properties of the world around us. I don't think it has a physiological background as Wikipedia implies - the mathematical definition of sparseness doesn't seem to be based on the way the human eye or visual processing works.

I say common case because that certainly doesn't hold for all photographs. I can quite simply take a photograph of a sheet of paper on which I printed some random noise, and that won't compress well at all.

It's interesting that if you think about compressing video, most of the common tricks I know can be easily connected to the physical properties of the world around us. For instance, differential encoding works well because you have distinct objects and in most scenes only a few objects will move at a time. This means only a part of the image will change against an unmoving background, so storing just the differences between frames will obviously be more efficient than encoding individual frames. The same goes, for example, for motion compensation, where camera movement in the first approximation just shifts the image in some direction without affecting shapes too much.
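As a toy illustration of the differential encoding argument (my own sketch, nothing like an actual codec): if only a small object moves between two frames, the frame difference is zero almost everywhere and therefore trivially cheap to store:

```python
import numpy as np

rng = np.random.default_rng(42)

# A static "background" frame and a copy where only a small 8x8 "object" changed.
frame1 = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
frame2 = frame1.copy()
frame2[10:18, 20:28] = rng.integers(0, 256, size=(8, 8))

diff = frame2 - frame1
print(np.count_nonzero(diff), "of", diff.size, "pixels changed")  # at most 64 of 4096
```

A real codec would then entropy-code this mostly-zero difference, which takes far fewer bits than the full frame.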

However, I see no such easy connection between the physical world and sparseness in the frequency domain, which is arguably much more fundamental to compressing images and video than the tricks above (after all, as far as I know most video formats still use some form of frequency-domain encoding underneath all the other processing).

Categories: Ideas

## AFG-2005 follow up

26.05.2013 12:56

Here is some additional information about the GW Instek AFG-2005 function generator I reviewed earlier.

My first actual use of this instrument left a pretty bad impression. My father and I attempted to perform some tests on his home-made short-wave transceiver and it failed badly. For some reason most commands we sent from the computer over USB would either crash the AFG-2005 or produce very unpredictable results, so we were not able to do anything useful with it. I'm still not sure what the reason was.

We used a somewhat longer USB cable (maybe 5 m), so one possibility is that data on the cable got corrupted. Considering that there was some high-powered RF hardware nearby, I don't think that is too far fetched. The Linux kernel didn't log any complaints though, which is weird. The other theory is that there was some incompatibility with the kernel running on the Ubuntu machine we were using. The user-space code was exactly the same that later ran fine at my place with the AFG-2005 connected to my own computer.

Anyway, I published some Python code for controlling the AFG-2005 from a Linux computer. Currently it's pretty rudimentary, but it nicely takes care of the specifics of the IEEE-488 implementation on this instrument.

To get it, run:

```
$ git clone https://github.com/avian2/gwinstek-tools.git
```

Example code to generate a 100 Hz square wave:

```
from gwinstek.afg import AFG2000

afg = AFG2000("/dev/ttyACM0")
afg.set_waveform(100, [-1.0, 1.0])
```

Some more notes regarding things I said in the previous post:

About synchronization issues with USBTMC: apparently IEEE-488 specifies some special commands to deal with command concurrency and synchronization. This somewhat dated GPIB programming tutorial explains it nicely. The AFG-2005 doesn't seem to support either the *WAI or the *OPC command though.

Regarding error reporting, IEEE-488 specifies an error queue where errors accumulate and can be read by the instrument controller using the SYSTEM:ERROR? query or cleared using the *CLS command. While the manual doesn't mention it explicitly, the AFG-2005 does support this functionality. The Python module I published above uses it to raise Python exceptions on command errors.
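Here is a rough sketch of how such an error queue can be used to turn instrument errors into exceptions. This is my illustration of the general pattern, not the actual code from the published module; the class and method names are made up, and in real use the transport would be a pyserial `serial.Serial` opened on `/dev/ttyACM0`:

```python
class InstrumentError(Exception):
    """Raised when the instrument reports a non-zero error code."""

class Instrument:
    def __init__(self, transport):
        # `transport` only needs write(bytes) and readline() -> bytes,
        # e.g. serial.Serial("/dev/ttyACM0") from pyserial.
        self.transport = transport

    def ask(self, command):
        # AFG-2005 commands must be terminated with an ASCII line feed.
        self.transport.write(command.encode("ascii") + b"\n")
        return self.transport.readline().strip().decode("ascii")

    def cmd(self, command):
        self.transport.write(command.encode("ascii") + b"\n")
        # Poll the IEEE-488.2 error queue; error code 0 means "no error".
        response = self.ask("SYSTEM:ERROR?")
        if int(response.split(",")[0]) != 0:
            raise InstrumentError(response)
```

Querying the error queue after every command costs a round trip, but on an instrument with no other diagnostics it's the only way to find out a command was silently rejected.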

Categories: Code

## Some notes about CC chips

20.05.2013 12:31

Here are two unusual things I noticed while working with Texas Instruments (formerly Chipcon) CC2500 and CC1101 integrated transceivers.

It appears that the actual bit rate in continuous synchronous serial mode can differ significantly from what Texas Instruments' SmartRF Studio 7 calculates. See for example the following measurements, which were taken on a CC2500 chip using a 27 MHz crystal oscillator as the reference clock.

| MDMCFG4 value | RF studio [baud] | measured [baud] |
|---------------|------------------|-----------------|
| 0x8a          | 50.0             | 38.5            |
| 0x8b          | 100.0            | 77.2            |
| 0x8c          | 200.0            | 154.0           |
| 0x8d          | 400.0            | 305.0           |

These bit rates were measured using an oscilloscope attached to the clock output of the transceiver, so I trust them to be correct. Bit rates I measured on a CC1101 agree with what SmartRF Studio predicts.

Update: I revisited this issue and the problem was a bug in my code that caused the MDMCFG3 register (which also affects the data rate) not to be properly programmed on the CC2500. Accounting for this bug, the data rates are within 1% of those calculated by SmartRF Studio 7 or from the formula given in the datasheet.
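For reference, the datasheet formula mentioned above can be sketched in Python. DRATE_E is the low nibble of MDMCFG4 and DRATE_M is the whole MDMCFG3 register; the MDMCFG3 value 0x22 below is just the register's reset value, used here as an example:

```python
def cc_data_rate(mdmcfg4, mdmcfg3, f_xosc=27e6):
    """Data rate in baud, per the CC1101/CC2500 datasheet formula:
    R = (256 + DRATE_M) * 2**DRATE_E * f_xosc / 2**28
    """
    drate_e = mdmcfg4 & 0x0f  # exponent: low nibble of MDMCFG4
    drate_m = mdmcfg3         # mantissa: whole MDMCFG3 register
    return (256 + drate_m) * 2 ** drate_e * f_xosc / 2 ** 28

# MDMCFG4 = 0x8a with MDMCFG3 = 0x22 and a 27 MHz reference clock:
print(cc_data_rate(0x8a, 0x22))  # roughly 29.9 kbaud
```

This makes it easy to see why a mis-programmed MDMCFG3 skews the rate: the mantissa enters the formula directly.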

The other issue I saw concerns the symbol mapping for 4FSK modulation on the CC1101. It looks like it depends on the configured bit rate. For example, at 200 baud, the symbol-to-frequency mapping appears to be as follows:

| symbol | Δf         |
|--------|------------|
| 00     | −0.33 fdev |
| 01     | −1.00 fdev |
| 10     | +0.33 fdev |
| 11     | +1.00 fdev |

However, at 45 baud, the mapping is different, with the symbol bit order apparently switched around:

| symbol | Δf         |
|--------|------------|
| 00     | −0.33 fdev |
| 01     | +0.33 fdev |
| 10     | −1.00 fdev |
| 11     | +1.00 fdev |

Update: it's possible this difference has something to do with exactly when the radio samples the data line in relation to the clock. Either I don't understand exactly what is going on or the radio isn't sampling the data when it is supposed to. Also, the factors of fdev in the tables were wrong (symbol frequencies are equally spaced, with the maximum deviation from the central frequency equal to fdev).

Of course, this doesn't matter if you are using two identically configured CC1101 chips on both ends of the radio link. But it is important if you want to communicate with some other hardware.
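When talking to other hardware, the two observed mappings can be written down as simple lookup tables (my own sketch; offsets are in units of fdev, using the corrected, equally spaced values of ±1/3 and ±1 mentioned in the update):

```python
# Observed CC1101 4FSK symbol-to-frequency mappings, in units of fdev.
MAPPING_200_BAUD = {0b00: -1/3, 0b01: -1.0, 0b10: +1/3, 0b11: +1.0}
MAPPING_45_BAUD  = {0b00: -1/3, 0b01: +1/3, 0b10: -1.0, 0b11: +1.0}

def symbol_to_offset(symbol, fdev, mapping):
    """Frequency offset from the channel center for a 2-bit symbol."""
    return mapping[symbol] * fdev

print(symbol_to_offset(0b01, 15e3, MAPPING_200_BAUD))  # -15000.0
```

A receiver that hard-codes one mapping would decode garbage from a transmitter configured at the other bit rate, which is exactly the interoperability hazard described above.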

Categories: Digital

## GW Instek AFG-2005

18.05.2013 20:57

I've been missing a good function generator for a while now. Over time I've accumulated an assortment of ad-hoc 555 oscillators and microcontroller hacks, cobbled together whenever I needed to generate some kind of signal to test a circuit I was working on. For a while I've been considering joining all of these in one box and slapping some kind of interface on top. I even bought a nice back-lit LCD display and some buttons for the front panel.

Then at one point, more or less by accident, I stumbled upon this low-end line of function generators from GW Instek, which looked cheaper and more versatile than what I was building myself. The opinions I could find on the web about it mostly appeared positive. Last week, when it came in stock in this corner of the world, I bought one.

After a couple of hours of playing with it I can say it's a perfectly functional instrument that, as far as I can see, works as specified. However, as you might expect, the fact that it costs around 150 € less than the next cheapest arbitrary waveform generator definitely shows.

Interestingly, my complaints mostly have to do with the user interface. For instance, the first thing I noticed is that there is no visual indicator of whether the display is showing the signal amplitude for a 50 Ω or a high-impedance output termination. Since this means a factor of two in the output voltage level, a wrong setting can easily be fatal for some circuits. I guess resetting to the high-Z setting before turning off the generator would be a prudent habit to keep.
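The factor of two comes from the generator's 50 Ω source impedance forming a voltage divider with the load. A quick sketch (my own, with made-up example values):

```python
def load_voltage(v_open, z_load, z_source=50.0):
    """Voltage across the load for a generator with a 50 ohm source impedance."""
    return v_open * z_load / (z_load + z_source)

v_open = 2.0  # open-circuit (high-Z) amplitude in volts
print(load_voltage(v_open, 50.0))  # matched 50 ohm load sees half: 1.0 V
print(load_voltage(v_open, 1e9))   # high-impedance load sees nearly the full 2.0 V
```

So if the display assumes the wrong termination, every amplitude you dial in is off by a factor of two at the actual load.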

One thing that sold me on this device is the USB connectivity. After being spoiled by the equipment I have at work, I missed being able to run automated tests for projects at home. While the manual doesn't mention the word USBTMC anywhere, the remote-control commands look suspiciously identical to the venerable industry-standard GPIB language.

When I connected the AFG-2005 to my computer, the first thing I noticed is that it doesn't actually use the USBTMC protocol. Instead, it behaves like a serial modem with a USB ACM interface. As far as Linux user space is concerned, this doesn't seem to make a big difference - both kernel drivers expose a simple character device for interaction with the device. With USBTMC I know that when the write() syscall returns, the device has already accepted the command and you can expect it to be producing the signal you programmed. But I'm not sure whether this also holds in this case.

The implementation of the protocol is somewhat buggy though. First of all, commands require a terminating ASCII line feed (\n) character, which isn't the case with USBTMC devices. Invalid commands are simply ignored - there are no error or diagnostic messages at all.

My device identifies itself like this in response to the *IDN? command:

```
GW INSTEK,AFG-2005,SN:EN810021,V1.11
```

It's quite easy to crash the instrument's firmware by feeding it malformed commands. In that case the front panel stops working and the only way to restore it to normal is the hardware power button.

The front panel display itself is somewhat sluggish, and if you issue a lot of commands over USB it might take a while to catch up with all the settings, even though, watching the generator's output, you can see they are applied much faster. The multi-purpose knob has similar issues: if you turn it too fast, the values start jumping around instead of going in one direction (this seems to be a common issue though - I've seen a lot of different devices have a problem with that).

The arbitrary function generator works nicely and can be easily programmed from a PC using the DATA:DAC command. What I don't quite understand is its syntax. In contrast to other instruments I've worked with, the manual says that the first argument is the start address of the arbitrary waveform. It gives an example where this address is 1000, but that doesn't seem to work at all on my instrument. The only address where the ARB output works as expected is 0. Using other addresses, I either get garbage output with no correlation to what I wanted or crash the firmware.

The arbitrary function generator has a 20 MHz maximum sampling frequency and the firmware allows you to fully exploit that, even when the frequency of the signal exceeds the 5 MHz maximum for pre-programmed waveforms. The results in that case are less than perfect, of course. The most obvious problem is significant DAC jitter. Here is how a square wave looks at 10 MHz:

In conclusion, this instrument gives you quite a lot of value for its price. But unsurprisingly, it's not on the level of instruments that have a few more zeros to the left of the Euro sign. I can't stop thinking, though, that a lot of these problems would take only a simple firmware patch to fix. If only the manufacturer would provide the source and a simple means of reprogramming.

Categories: Analog

## Wireless microphone simulation with VESNA

13.05.2013 11:51

It sounds strange, but analog FM wireless microphones (the kind you find in studios, theaters and soon-to-be-restaurants) are one of the obstacles to efficient re-use of the UHF band that was freed by the digital switchover. They operate in the same frequency band as the now-silenced analogue TV broadcasts, but on the other hand they have never been well regulated. While the locations and frequencies of TV transmitters are well known, microphones can appear on a lot of different channels and in practically any location.

New devices operating in the TV white-spaces can't be allowed to interfere with them, so they either can't use frequencies where microphones appear (leading to unused spectrum) or they must detect them using spectrum sensing. Reliable and energy-efficient detection of wireless microphones at low signal levels is an open research topic.

To test different ways of detecting microphones, the IEEE has come up with simulation waveforms that can be easily reproduced on lab equipment and are reasonably similar to what microphones transmit in real-life usage. They simulate a silent microphone, a microphone used with a soft voice, and a microphone used with a loud voice by frequency modulating a sine wave with different baseband frequencies and deviations (described in this Word document).
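Such a test waveform is also easy to generate in software. Here is a sketch (my own, not the code used on any of the instruments below) of the complex-baseband FM signal for the soft voice case, a 3.9 kHz tone with 15 kHz peak deviation:

```python
import numpy as np

def fm_baseband(f_mod, f_dev, fs, duration):
    """Complex-baseband FM: a single tone at f_mod with peak deviation f_dev."""
    t = np.arange(int(fs * duration)) / fs
    beta = f_dev / f_mod  # modulation index
    phase = beta * np.sin(2 * np.pi * f_mod * t)
    return np.exp(1j * phase)

fs = 1e6
s = fm_baseband(3.9e3, 15e3, fs, 0.01)

# Instantaneous frequency recovered from the phase should swing +/- 15 kHz.
inst_f = np.diff(np.unwrap(np.angle(s))) * fs / (2 * np.pi)
print(inst_f.max())
```

In hardware the same samples would be upconverted to the UHF channel under test; the baseband math is identical.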

For example, here's how the soft voice simulation looks on a spectrum analyzer when transmitted by a big and expensive Rohde&Schwarz vector signal generator. This is a 3.9 kHz tone modulated with a 15 kHz deviation:

And this is the same signal transmitted with a USRP N210:

As I mentioned before, VESNA also has some signal generation capabilities. This is how it looks when the signal is simulated using a direct digital synthesis algorithm and the 4FSK modulator on the CC1101 transceiver:

The order-of-magnitude price difference in hardware shows itself well enough here, as the transmission from VESNA obviously differs quite a bit from the desired waveform. Still, it's good enough for some detection experiments and, of course, you can't have a USRP mounted on every light pole.

Although I haven't yet done detailed measurements, VESNA's transmission actually does seem to comply with the FCC's requirements regarding spurious levels for wireless microphones (but not with the more stringent European ETSI regulations).

Categories: Analog

## World wide wheel, reinventing of

09.05.2013 22:00

The direction browsers and web technology are moving these days truly baffles me. As usual in the software world, it's all about piling one shiny feature on top of another. Now, I'm not against shiny per se, but it seems that a lot of these innovations come from people who haven't even taken an hour to look at the already existing body of knowledge and standards that has accumulated over the years. With the frenzy of rolling releases and implementation-is-the-standard hotness, it's not even surprising that those are then implemented by browsers before someone with a long enough beard can stand up and shout Hey! We already thought of that in this here RFC.

Take for example all the buzz about finally solving the problem of authentication on the web. Finally, there's a way to securely sign into a website without all the mess of hundreds of hard-for-me-to-remember yet easy-to-guess-by-the-cracker user name and password combinations. Wonderful. Except that this exact thing has existed on the web since people did cave paintings and used Netscape to browse. It's called SSL client-side certificates and, amazingly, worked well enough for on-line banking and government sites even before the invention of pottery and cloud-based identity providers.

But that's just the most glaring case. Another front where this madness continues is pushing things from the old HTTP headers into fancy new HTML5. Take for example a proposal to add an HTML attribute that defines whether a browser should display something or save it to disk by default. This functionality has existed for ages in the form of an HTTP header, yet that is somehow dismissed as a server-side solution (what does that even mean?).

I wonder how many web developers today are even aware that there exists a mechanism for the browser to tell the server which language the user prefers (but we most certainly need the annoying whole-screen language selection pop-up-and-click-through!). Or that the client can tell the server whether it would rather have a PNG for downloading or a friendly HTML page for viewing in a browser (meh, we'll just fudge that with some magic on-click Javascript handlers).

Now I can see someone laughing and asking how ridiculous this idea is and whether I have ever even tried to use one of those ancient features. It's not, and I have. It's consistently painful. But it's only so because, for some reason, browsers long ago decided to make the most horrible interface to such functionality imaginable and then forgot to ever fix it. Mostly it's hidden ten levels down in some obscure dialog box, and if banks didn't give you click-by-click instructions on how to import a certificate, 99% of people would give up after a few hours and continue chiseling clay tablets. Now imagine if a tenth of the time spent reinventing the wheel were spent just improving the usability of existing features. Why can't I go to a web page and get a prompt: Hey! This web page wants you to log in. Do you want me to use one of these existing certificates or generate a new, throw-away one? The world would be just a tiny bit better, believe me.

In the end, I think modern browsers have focused way too much on improving the situation for the remote web page they are displaying and neglected the local part around it. And I believe this direction is bad in the long run. Consider also the European cookie directive. I'm pretty sure this bizarre catch-22, where web pages are now required to manage cookie preferences for you, would not be needed if browsers provided a sane interface for handling those preferences in the first place. My Firefox has three places (that I know of!) where I can set which websites are allowed to store persistent state on my computer. Plus it manages to regularly lose the settings, but that's a different story.

Categories: Life

## Custom display resolutions in QEMU

04.05.2013 19:19

QEMU emulates a VESA-compliant VGA card that supports a static set of standard and a-bit-less-standard display resolutions. However, this set is far from universal and, for example, doesn't include the wide-screen 1600x900 resolution of my laptop. This means that running a virtualized OS full-screen would always stretch the display somewhat and give a blurry image.

Here's how I fixed that. The following instructions should pretty much work for adding an arbitrary resolution to QEMU's virtualized graphics card (but only when using -vga std). They are based on this patch.

Fetch QEMU source (I used 1.4.0) and edit roms/vgabios/vbetables-gen.c. For instance, to add 1600x900 I applied this patch:

Update: this patch still works for QEMU release 2.5.0. If you are using a git checkout instead of the release tarball, see comments below.

```
--- qemu-1.4.0.orig/roms/vgabios/vbetables-gen.c	2013-02-16 00:09:22.000000000 +0100
+++ qemu-1.4.0/roms/vgabios/vbetables-gen.c	2013-05-04 11:46:55.000000000 +0200
@@ -76,6 +76,9 @@
 { 2560, 1600, 16                     , 0x18a},
 { 2560, 1600, 24                     , 0x18b},
 { 2560, 1600, 32                     , 0x18c},
+{ 1600,  900, 16                     , 0x18d},
+{ 1600,  900, 24                     , 0x18e},
+{ 1600,  900, 32                     , 0x18f},
 { 0, },
 };
```

Now re-build the VGA BIOS binary image (apt-get install bcc first):

```
$ cd roms/vgabios
$ make stdvga-bios
```

QEMU's make install will not install the image you just built. Instead, it will use an already-built binary shipped with the source. Therefore you have to install it manually:

```
$ cp VGABIOS-lgpl-latest.stdvga.bin $PREFIX/share/qemu/vgabios-stdvga.bin
```

Of course, replace $PREFIX with the prefix you used when installing QEMU (/usr/local by default). Now when you start a virtualized OS you should see the resolution you added to vbetables-gen.c (for example, as reported by xrandr or under Screen resolution in Windows).

Categories: Code

## Cost of a Kindle server

01.05.2013 10:48

I was wondering how much running a Kindle as an always-on, underpowered Debian box was costing me in terms of electricity. So I plugged it into one of the Energycount 3000 devices and monitored its power consumption over the last four days. This takes into account the power consumption of the Kindle itself as well as the efficiency of the small Nokia cell phone charger I'm using to power it.

ec3k reported an average power of 1.0 W (and a maximum of 2.6 W). Dividing the watt-seconds count by time also yielded 1 W to three decimal places. This suspiciously round number makes me think it's due to the limited precision of the measurement, but let's consider it accurate for the moment.

1 W amounts to 0.72 kWh per month. At the prices I'm currently paying for electricity, this costs me 0.083 € per month. For comparison, a cup of synthetic-tasting coffee from the machine at work costs around twice as much, and running my desktop machine all the time would be around a hundred times as expensive.
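The arithmetic, spelled out (the per-kWh tariff below is inferred from the two figures quoted above, not something stated independently):

```python
avg_power_w = 1.0
hours_per_month = 24 * 30
kwh_per_month = avg_power_w * hours_per_month / 1000
print(kwh_per_month)  # 0.72 kWh

# Tariff implied by the quoted cost: 0.083 EUR for 0.72 kWh
eur_per_kwh = 0.083 / kwh_per_month
print(round(eur_per_kwh, 3))  # ~0.115 EUR per kWh
```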

Categories: Life