## Spectrum sensing in a nutshell

24.04.2012 20:28

Spectrum sensing is a phrase that is being used a lot at my current job. I have mentioned it before in relation to the experiments in Munich back in February. Let me explain what it means and why it is important.

One possible way of enhancing radio communications in the future is making the receivers and transmitters aware of their environment and capable of adjusting the radio link accordingly. For instance, they could intelligently avoid uncontrollable interference at a specific frequency, cooperatively share a limited part of the spectrum or use frequencies that experience the least fading in the current location. This idea usually falls under the somewhat awkward umbrella of cognitive radio (which sometimes also includes gratuitous applications of strong artificial intelligence and other things not directly related to radio communications).

So broadly speaking, spectrum sensing means measuring the properties of the radio-frequency part of the electromagnetic radiation propagating in an area of interest. In current real-life usage scenarios you are usually interested in knowing whether there are third-party transmitters operating in the same part of the spectrum as you. This might be because you don't want them to interfere with your connection. But equally important are cases where you don't want to interfere with them. For instance, frequencies where formerly only big, licensed TV operators were allowed to transmit are now being opened to the general public and consumer devices, with the added catch that these devices must make sure their transmissions will not interfere with the licensed users.

The latter use case is especially problematic. If you are only interested in the effect of any third-party transmitters on your radio link, measuring their signal strength at your own antenna is sufficient, as the location of the measurement is the same as the point of interest. However, electromagnetic field theory says that in the general case you can't infer, just from measurements at your own antenna, how your transmissions will affect a link between two distant devices in your neighborhood. Empirical rules have been developed that work in common circumstances with high enough reliability, but they are necessarily hard to satisfy in practice, as they require very sensitive spectrum sensing receivers.

How do you detect a transmission? The simplest method is called energy detection: you measure the received signal level at the antenna and declare that a transmission has been detected if the level is high enough above the noise floor. Energy detectors work quite similarly to classical swept-tuned spectrum analyzers, except that they are much simpler and cheaper. Usually an integrated silicon tuner is used; for example, the Texas Instruments CC2500 is a popular choice for the 2.4 GHz ISM band.

Simple energy detection has one big problem though: you can only detect signals that are significantly above the noise level. For example, in TV-band white spaces the FCC requires a detection threshold of -114 dBm. At these levels of sensitivity even the unavoidable thermal noise presents major problems. This can be solved with more advanced methods of detection. For instance, repeating patterns can still be detected when the signal-to-noise ratio falls well below unity, and since most real-world transmissions include some repetition, cyclostationary detection doesn't hurt generality much.
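To illustrate why repetition helps (again a toy sketch of mine, not the algorithm any particular receiver uses): if the signal is known to repeat with some period, correlating the input with a delayed copy of itself makes the repeating component add up coherently while uncorrelated noise averages out:

```c
#include <stddef.h>

/* Normalized correlation of a signal with itself delayed by `period`
 * samples. For a signal repeating with that period the result tends
 * towards 1 even when noise dominates individual samples; for pure
 * noise it tends towards 0, given enough samples to average over. */
double lag_correlation(const double *x, size_t n, size_t period)
{
	double acc = 0.0, norm = 0.0;
	size_t i;

	for (i = 0; i + period < n; i++) {
		acc += x[i] * x[i + period];
		norm += x[i] * x[i];
	}

	return norm > 0.0 ? acc / norm : 0.0;
}
```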

To add some practice to all of this theory, here is a spectrum sensing receiver I developed at the Jožef Stefan Institute over the past months. It fits on a VESNA node and is built around the TDA18219HN silicon tuner from NXP. This single chip includes most of the radio-frequency circuitry as well as the intermediate-frequency part, a lot of which can be reconfigured through an I2C interface. It's also cheap enough that many such receivers can be used in a sensor network.

The receiver can do energy detection in the VHF and UHF bands with receive bandwidths between 1.7 MHz and 9 MHz and is specifically designed for research into TV white-space reuse. In theory it should also be capable of cyclostationary detection using VESNA's CPU, although that has yet to be tested in practice. Here you can see a spectrogram of the UHF TV band that was recorded with it. The central Slovenian DVB-T multiplex can be clearly seen at 562 MHz.

Making this hardware was a lot of fun and I might write a bit more about it in a separate post. There is some ambiguity about how much information I can disclose, though, as the documentation for the tuner chip and the reference implementation came with some crazily restrictive fine print. However, you can already dig through the source code of the VESNA spectrum sensing application and my spectrum analyzer Python script (which has recently been updated to work with a properly equipped VESNA in addition to the Fun Cube Dongle).


## Reading the STM32 real-time clock atomically
11.04.2012 20:22

VESNA uses the STM32F1 family of ARM Cortex-M3 microcontrollers from STMicroelectronics. These chips have a built-in real-time clock peripheral that can be used to keep track of time and date. In VESNA it uses an external 32.768 kHz tuning-fork quartz oscillator and keeps running even when the CPU has been powered down to conserve energy.

The clock can be used in a number of ways: it can trigger periodic (e.g. system tick) and non-periodic (e.g. alarm) interrupts, or you can simply read its value when you need a timestamp in your code. The latter use might appear to be the simplest, but it can be especially problematic, as the peripheral stores time in no less than four 16-bit registers spread out over 16 bytes of address space. They cannot be read atomically, which can lead to subtle race-condition bugs where the clock appears to be wrong for the duration of one tick. I recently spent quite some time debugging such a bug and would like to share my findings (for the best experience, open up the reference manual at chapter 18: Real-time clock).

The RTC keeps time in two internal registers: the prescaler RTC_DIV counts down periods of the RTC oscillator. Once it reaches zero it is reset and the counter register RTC_CNT gets incremented. These two registers aren't directly accessible - instead, each of them has two 16-bit shadow registers on the CPU-accessible APB1 bus that get periodically updated with fresh values, synchronously to the CPU bus clock. These are called RTC_DIVH, RTC_DIVL, RTC_CNTH and RTC_CNTL in the documentation.

VESNA uses what is likely the most common configuration: the prescaler is set so that it wraps around every 32768 cycles, making RTC_CNT count seconds while RTC_DIV can be used to keep fractional seconds with around 30 μs resolution.

There are two important things to watch out for:

• As mentioned before, you can't read the four values atomically. This means that between reading, say, RTC_CNTH and RTC_DIVL the values might have changed. In the best case this means you get a value off by one RTC tick. In the worst case, the lower registers have just overflowed into an RTC_CNTH increment and the value you read is off by around 18 hours.
• RTC_CNT only gets incremented one clock tick after RTC_DIV gets reset.
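To put a number on that worst case, here is a small illustration (the helper function is mine, just for demonstration) of how the two counter halves combine and what a torn read costs:

```c
#include <stdint.h>

/* Combine the two 16-bit counter halves into the 32-bit seconds
 * count. If CNTL overflows between the read of CNTH and the read of
 * CNTL, the assembled value is off by a full 2^16 seconds. */
uint32_t assemble(uint16_t cnth, uint16_t cntl)
{
	return ((uint32_t)cnth << 16) | cntl;
}
```

With the counter at 0x0000ffff seconds and an increment landing between the two bus reads, you assemble high = 0x0000 with low = 0x0000 and get a timestamp 65536 seconds (a bit over 18 hours) in the past.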

First, you might be tempted to make the four bus reads atomic by synchronizing them with the shadow register update. There is an RTC_CRL_RSF (registers synchronized) flag that gets set by hardware each time the shadow registers are updated. I tried this, thinking that if I read the values immediately after the flag gets set, they won't change for another RTC clock period (which should be plenty, considering the RTC runs on the order of 10 kHz and the CPU on the order of 10 MHz). However, this does not work reliably for some reason - the documentation only says this works for the first update of the registers anyway. Such synchronization also slows down the clock read-out function and makes its run time unpredictable.

The second point is actually documented, if you look carefully at the timing diagram in the real-time clock chapter of the reference manual. But it is easy to overlook, and I wasted more than one day thinking that the behavior I was observing was due to some problem in my code. It also makes detecting counter overflow somewhat more complicated.

In the end, I went with code like this:

```c
uint16_t divl1 = RTC_DIVL;
uint16_t cnth1 = RTC_CNTH;
uint16_t cntl1 = RTC_CNTL;

uint16_t divl2 = RTC_DIVL;
uint16_t cnth2 = RTC_CNTH;
uint16_t cntl2 = RTC_CNTL;

uint16_t divl, cnth, cntl;

if(cntl1 != cntl2) {
	/* overflow occurred between reads of cntl, hence it
	 * couldn't have occurred before the first read. */
	divl = divl1;
	cnth = cnth1;
	cntl = cntl1;
} else {
	/* no overflow between reads of cntl, hence the
	 * values between the reads are correct */
	divl = divl2;
	cnth = cnth2;
	cntl = cntl2;
}

/* CNT is incremented one RTCCLK tick after the DIV counter
 * gets reset to 32767, so to correct for that increment
 * the seconds count if DIV just got reset */
uint32_t sec = (((uint32_t)cnth) << 16 | ((uint32_t)cntl));
if(divl == 32767) sec++;

/*
 *        1000000                   15625
 * usec = ------- * (32767 - div) = ----- * (32767 - div)
 *         32768                     512
 */

uint32_t usec = 15625 * (32767 - ((uint32_t)divl)) / 512;
```


This code makes two assumptions: that RTC_DIVH is always zero (i.e. the prescaler divides the oscillator frequency by less than 65536) and that the CPU is fast enough to read the four registers in less than one increment of the counter registers. Note that the latter can be affected by interrupt service routines, so if you have a slow CPU clock, a fast-running RTC and/or long-running ISRs, it might be necessary to disable interrupts while reading the RTC registers.
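The integer arithmetic at the end can at least be sanity-checked on a host machine; here is a sketch (the function wrapper is mine, not part of the VESNA code):

```c
#include <stdint.h>

/* DIV-to-microseconds conversion from the listing above, assuming the
 * usual 32768-cycle prescaler: DIV counts down from 32767, so the
 * elapsed fraction of a second is (32767 - DIV) / 32768. The factor
 * 1000000/32768 reduces to 15625/512, and 15625 * 32767 still fits
 * comfortably in 32 bits. */
uint32_t usec_from_div(uint32_t divl)
{
	return 15625 * (32767 - divl) / 512;
}
```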

A version of this function that would work with any prescaler setting would make a nice addition to libopencm3, but I have yet to come up with one elegant enough to warrant a patch.

Note that currently both libopencm3, with rtc_get_counter_val() and rtc_get_prescale_div_val(), and ST's FWLIB, with RTC_GetCounter() and RTC_GetDivider(), get this wrong. They also don't support getting both values in a consistent way. There is a discussion about this issue on the STM32 forums and the solution given there is functionally identical to mine (though I don't like the potential for a goto-induced infinite loop).


## Decibels per hertz

05.04.2012 20:10

I promise this isn't turning into a series of math rants. But since I have lately been studying spectrum analyzers and similar machinery, let me tell you about a small annoyance that has been bothering me in texts and datasheets covering this topic.

Often, when discussing noise that has a constant power spectral density over a range of frequencies, the level of this noise is given on the logarithmic scale in units of decibels per hertz (for instance, thermal noise is often said to be -174 dBm/Hz). This is wrong, as it implies that you need to multiply this value by the bandwidth (in hertz) to get the power in dBm, when in reality you need to add a logarithm of the bandwidth. Of course, everyone dealing with these equations just knows that. Logarithms turn multiplication into addition, right? But then you end up with equations where the two sides of the equals sign have different units, and that is just plain sloppy writing.

Here's how you properly convert a formula for Johnson-Nyquist noise into logarithmic units:

P = kT\Delta f

Apply definition of dBm:

P_{dBm} = 10\log{\frac{kT\Delta f}{1\mathrm{mW}}}

See how the value inside the logarithm has no dimension? The value in the numerator is in units of power and that cancels with the milliwatt in the denominator. If you are doing things correctly there should never be any physical unit inside a logarithm or exponential function.

To split off the bandwidth term, multiply and divide by one hertz.

P_{dBm} = 10\log{\frac{kT\Delta f\cdot 1\mathrm{Hz}}{1\mathrm{mW}\cdot 1\mathrm{Hz}}}
P_{dBm} = 10\log{\frac{kT\cdot1\mathrm{Hz}}{1\mathrm{mW}}\cdot\frac{\Delta f}{1\mathrm{Hz}}}
P_{dBm} = 10\log{\frac{kT\cdot1\mathrm{Hz}}{1\mathrm{mW}}}+10\log{\frac{\Delta f}{1\mathrm{Hz}}}

Note that you still have dimensionless values inside logarithms. There is no need to invent magic multiplication factors "because of milliwatts" or fix the units of the variables in the equation. The division by 1 Hz in the bandwidth part also nicely solves the confusion that happens when you have bandwidth defined in units other than hertz.

So how do you concisely write this value? There is no short notation that I'm aware of that conveys the proper meaning. I would simply write out that noise level is -174 dBm at 1 Hz bandwidth and leave it at that.

Now let the flames come that this is school nonsense and that real engineers with real deadlines don't do any of this fancy dimensional-analysis stuff, and that back-of-the-envelope calculations just work since you just know that this number here is in watts and that one there is in kilohertz.
