Correcting the GNU Radio power measurements

18.04.2015 11:20

I was previously writing about measuring the noise figure of a rtl-sdr dongle. One of the weird things I noticed about my results was that I arrived at two different values for the noise figure when using two methods that should in theory agree with each other. In the Y-factor method I used a Gaussian noise source as a reference, while in the twice-power method I used an unmodulated sine wave signal. Investigating this difference led me to discover that the power measured using the GNU Radio power detector I made was inexplicably different for modulated signals versus unmodulated ones. This was in spite of the fact that the level indicator on the signal generator and a control measurement using an R&S FSV signal analyzer agreed that the power of the signal did not change after switching on modulation.

This was very puzzling and I did a lot of different tests to find the reason, none of which gave a good explanation of what was going on. Finally, Lou on the GNU Radio Discuss mailing list provided a tip that led me to solve this riddle. It turned out that, due to what looks like an unlucky coincidence, two errors in my measurements almost exactly canceled out, obscuring the real cause and making this look like an impossible situation.

R&S SMBV and FSV instruments connected with a coaxial cable.

What Lou's reply brought to my attention was the possibility that the power measurement function on the signal analyzer might not be showing the correct value. He pointed out that the power measurements might be averaged in log (dBm) scale. This would mean that the power of signals with a non-constant envelope (like the Gaussian noise signal I was using) would be underestimated compared to unmodulated signals. A correct power measurement using averaging in the linear (mW) scale, like the one implemented in my GNU Radio power meter, would on the other hand lack this negative systematic error. This would explain why the FSV showed a different value than all the other devices.
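The effect is easy to reproduce numerically. Here is a minimal NumPy sketch (purely synthetic signals, nothing to do with the instrument itself) that compares the two averaging strategies for a CW tone and for Gaussian noise of identical mean power:

import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Complex Gaussian noise and an unmodulated carrier, both with mean power 1.0.
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
cw = np.exp(2j * np.pi * 0.01 * np.arange(N))

for name, x in (("noise", noise), ("CW", cw)):
    p = np.abs(x) ** 2                      # instantaneous power
    lin_avg = 10 * np.log10(np.mean(p))     # average in linear scale, then convert to dB
    log_avg = np.mean(10 * np.log10(p))     # convert each sample to dB, then average
    print(f"{name:>5}: linear average {lin_avg:+.2f} dB, log average {log_avg:+.2f} dB")

For the constant-envelope carrier the two averages agree, while for the noise signal the dB-averaged value comes out roughly 2.5 dB low.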

To perform control measurements I was using a specialized power measurement mode on the instrument. In this mode, you tell the instrument the bandwidth of the channel you want to measure the total signal power in, and it automatically integrates the power spectral density over the specified range. It also automatically switches to an RMS detector, sets the attenuator and resolution bandwidth to optimal settings, and so on. The manual notes that while this cannot compete with a true power meter, the relative measurement uncertainty should be under 0.5 dB.
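Conceptually, such a channel power measurement boils down to integrating the power spectral density over the selected bandwidth. A rough Python sketch of that idea (a plain periodogram, certainly not how the FSV does it internally; channel_power is just an illustrative name):

import numpy as np

def channel_power(iq, fs, f_lo, f_hi):
    """Total power of the complex baseband signal iq in the band
    [f_lo, f_hi] Hz, estimated by summing an FFT periodogram over
    that range."""
    n = len(iq)
    spectrum = np.fft.fftshift(np.fft.fft(iq)) / n
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
    bin_power = np.abs(spectrum) ** 2            # power per FFT bin
    band = (freq >= f_lo) & (freq <= f_hi)
    return bin_power[band].sum()                 # integrate over the channel

Summed over the whole sampled band this returns the total mean power of the signal (Parseval's theorem), which makes for a handy sanity check.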

By default, this mode doesn't do any averaging, so for a noise signal the power reading jumps around quite a bit. I turned on trace averaging to get a more stable reading, without thinking that this might do the average in log scale. After reading Lou's reply, I did some poking around the menus on the instrument and found an "Average Mode" setting that I didn't notice before. Setting it to "Power" instead of "Log" indeed made the FSV power measurement identical to what I was seeing on the USRP, rtl-sdr and my SNE-ESHTER device.

Excerpt from the R&S FSV manual about averaging mode.

So, a part of the mystery has apparently been solved. I guess the lesson here is that it pays to carefully read the relevant parts of the (924 page) manual. To be honest, the chapter on power measurements does contain a pointer to the section about the averaging mode, and the -2.5 dB difference mentioned there should have rung a bell.


The question still remained why the level indicator on the R&S SMBV signal generator was wrong. Assuming the FSV and the other devices now worked correctly, the generator wrongly increased the signal level when modulation was switched on. Once I knew where to look though, the reason for this was relatively easy to find. It traces back to a software bug I introduced a year ago when I first started playing with the arbitrary waveform generator.

When programming a waveform into the instrument over USB, you have to specify two values in addition to the array of I/Q samples: an RMS offset and a peak offset. They are supposed to tell the instrument the ratio between the signal's RMS value and the full range of the DAC, and the ratio between the signal's peak value and the full range of the DAC. I still don't know exactly why the instrument needs you to calculate them - these values are fully defined by the I/Q samples you provide and the instrument could easily calculate them itself. However, it turns out that if you provide a wrong value, the signal level will be wrong - in my case by around 2.5 dB.

The correct way to calculate them is explained in an application note:

rms = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1}(I_n^2+Q_n^2)}
peak = \sqrt{\max_{n=0}^{N-1}(I_n^2+Q_n^2)}
full = 2^{15}-1
rms\_offs = 20\cdot\log\frac{full}{rms} \qquad peak\_offs = 20\cdot\log\frac{full}{peak}
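Translated into Python, the calculation looks roughly like this (a sketch along the lines of my waveform upload script; smbv_offsets is just an illustrative name):

import numpy as np

def smbv_offsets(i, q):
    """RMS and peak offsets in dB relative to the DAC full scale,
    following the formulas above. i and q are arrays of 16-bit
    integer sample values."""
    env2 = i.astype(float) ** 2 + q.astype(float) ** 2   # squared envelope
    rms = np.sqrt(np.mean(env2))
    peak = np.sqrt(np.max(env2))
    full = 2 ** 15 - 1
    return 20 * np.log10(full / rms), 20 * np.log10(full / peak)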

It seems I initially assumed that the full range for the I/Q baseband was defined by the full range of individual DACs (the square outline on the complex plane below). In reality, it is defined by the amplitude of the complex vector (the shaded circle), which in hindsight makes more sense.

Full scale of the SMBV arbitrary waveform generator.

After correcting the calculation in my Python script, the FSV power measurements and the generator's level indicator match again. This is what the spectrum analyzer now shows for an unmodulated sine wave with -95 dBm level set on the generator:

Fixed CW signal power measurement with R&S FSV.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level.

Fixed noise signal power measurement with R&S FSV.

What is the moral of this story? I guess don't blindly trust big expensive instruments. Before someone else pointed it out, I didn't even consider that the issue might be with my control measurements. I was only looking at GNU Radio and the "cheap" SDR hardware and questioning my basic understanding of signal theory. It's not that the two instruments were not performing up to their specifications - I was merely using them in the wrong way. Considering their complexity (both have ~1000-page manuals, admittedly neither of which I have read cover-to-cover), that does not seem such a remote possibility anymore.

The other lesson is that doing silly lab measurements after hours can have benefits. If I hadn't been measuring the rtl-sdr dongle out of curiosity, I wouldn't have discovered that I had a bug in my scripts. This discovery in fact invalidates some results that were on their way to being published in a scientific journal.

Posted by Tomaž | Categories: Analog | Comments »

Signal power in GNU Radio

11.04.2015 18:28

In my recent attempts to measure the noise figure of a rtl-sdr dongle, I've noticed that the results of the twice-power method and the Y-factor method differ significantly. In an attempt to find out the reason for this difference, I did some further measurements with different kinds of signals. I found out that the power detector I implemented in GNU Radio behaves oddly. It appears that the indicated signal power depends on the signal's crest factor, which should not be the case.

Update: As my follow-up post explains, I was using a wrong setup on both the spectrum analyzer and the signal generator.

First of all, I would like to clarify that what I'm doing here is comparing the indicated power (in relative units) for two signals of identical power. I'm not trying to determine the absolute power (say in milliwatts). As the GNU Radio FAQ succinctly explains, the latter is tricky with typical SDR equipment.

The setup for these experiments is similar to what I described in my post about noise figure: I'm using an Ezcap DVB-T dongle tuned to 700.5 MHz. I'm measuring the power in a 200 kHz band that is offset by -500 kHz from the center frequency. As far as I can see from the FFT, this band is free from spurs and other artifacts of the receiver itself. Signal power is measured by multiplying the signal with a complex conjugate of itself and then taking a moving average of 50000 samples.

Updated rtl-sdr power detector flow graph.
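For reference, here is a rough NumPy equivalent of this detector (not the GRC flow graph itself, just the same arithmetic, with illustrative names):

import numpy as np

def power_detector(iq, navg=50000):
    """Multiply the signal by its own complex conjugate and take a
    moving average over navg samples."""
    inst_power = (iq * np.conj(iq)).real          # instantaneous power |x|^2
    window = np.ones(navg) / navg
    return np.convolve(inst_power, window, mode="valid")

# Sanity check with simulated signals: a CW tone and Gaussian noise of
# equal mean power should give the same reading from an ideal quadratic
# detector.
rng = np.random.default_rng(0)
n = 500_000
cw = np.exp(2j * np.pi * 0.05 * np.arange(n))
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

for name, x in (("CW", cw), ("noise", noise)):
    print(name, 10 * np.log10(np.mean(power_detector(x))), "dB")

With simulated signals like these, the two readings come out identical, which is consistent with the simulated receiver test mentioned below.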

I'm using a Rohde & Schwarz SMBV vector signal generator that is capable of producing an arbitrary waveform with an accurate total signal power. As a control, I've also set up an FSV spectrum analyzer to measure the total signal power in the same 200 kHz band as the rtl-sdr setup.

For example, this is what a spectrum analyzer shows for an unmodulated sine wave with -95 dBm level set on the generator:

R&S FSV power measurement for CW.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level:

R&S FSV power measurement for Gaussian noise.

The measured power in the 200 kHz channel in both cases agrees well with the power setting on the generator. The difference probably comes from losses in the cable (I used a 60 cm low-loss LMR-195 coax that came with the USRP), connectors, calibration errors in both instruments and the fact that the FSV adds its own noise power to the signal. The important thing, however, is that the power read-out changes by only 0.19 dB when the modulation is switched on. I think this is well within the acceptable measurement error range.

Repeating the same two measurements using the rtl-sdr dongle and the GNU Radio power detector:

rtl-sdr signal power measurements for CW and noise.

Note that now the modulated signal shows much higher power than the unmodulated one. The difference is 2.53 dB, which cannot be attributed to random error.

In fact, this effect is repeatable and not specific to the rtl-sdr dongle. I've repeated the same measurements using a USRP N200 device with an SBX daughterboard. I've also used a number of different signals, from band-limited Gaussian noise and multiple CW signals to an amplitude-modulated carrier.

The results are summarized in the table below. To make things clearer, I'm showing the indicated power relative to the CW signal. I've used -95 dBm mean power for the rtl-sdr and -100 dBm for the USRP, to keep the signal-to-noise ratio approximately the same on both devices.

Signal                        Ppeak/Pmean [dB]   Prtl-sdr [dB]   PUSRP [dB]
CW                                        0.00            0.00        0.00
2xCW, fd=60 kHz                           3.02            0.02        0.00
2xCW, fd=100 kHz                          3.02            0.04        0.04
3xCW, fd=60 kHz                           3.68           -0.03        0.00
100% AM, fm=60 kHz                        6.02            1.20        1.25
Gaussian noise, BW=100 kHz               10.50            2.55        2.66

As you can see, both devices show an offset for signals that have a significant difference between peak and average power. The offsets are also very similar between the two devices, which suggests that this effect is not caused by the device itself.

Any explanation based on the physical receiver design that I can imagine would result in a lower gain for signals with a high peak-to-mean power ratio, which is exactly the opposite of what I've seen.

It doesn't seem to be caused by some smart logic in the tuner adjusting gain for different signals. The difference in gain seems to remain down to very low signal powers. I think it is unlikely that any such optimization would work down to very low signal-to-noise levels. This also excludes any receiver non-linearity as the cause as far as I can tell.

GRC power detector response for CW and noise signals.

If I were using an analog power detector, this kind of effect would be typical of a detector that does not measure signal power directly (like a diode detector, which has an exponential characteristic instead of a quadratic one). However, I'm calculating signal power numerically, and you can't get a more exact quadratic function than x².

I've tested a few theories regarding numerical errors. In fact, the results do differ somewhat between the moving average and the decimating low-pass filter. They also differ between using the conjugate and multiply blocks and using the RMS block. However, the differences are insignificant as far as I can see and don't explain the measurements. I've chosen the flow graph setup shown above because it produces figures that are closest to an identical calculation done in NumPy. Numerical errors also don't explain why the same flow graph produces valid results for a receiver simulated with signal and noise source blocks.

So far I'm out of ideas what could be causing this.

Posted by Tomaž | Categories: Analog | Comments »

Notes on HB9AJG's E4000 sensitivity measurements

03.04.2015 20:28

In August 2013, a ham operator with the call sign HB9AJG posted to the SDRSharp Yahoo group a detailed report on measurements done with two rtl-sdr dongles. They used laboratory instruments to evaluate many aspects of these cheap software-defined radio receivers. As lab reports go, this one is very detailed and contains all the information necessary for anyone with sufficient equipment to replicate the results. The author certainly deserves praise for being so diligent.

In my previous blog post, I mentioned that my own measurements of the noise figure of a similar rtl-sdr dongle disagree with HB9AJG's report. My Ezcap DVB-T dongle is using the same Elonics E4000 integrated tuner as the Terratec dongle tested by HB9AJG. While this does not necessarily mean that the two devices should perform identically, it did prompt me to look closely at HB9AJG's sensitivity measurements. In doing so, I believe I found two errors in the report, which I want to discuss below.

Remark 1: The dongles have a nominal input impedance of 75 Ohms, whereas my signal generators have output impedances of 50 Ohms. My dBm figures take account of the difference of 1.6dB.

The first odd thing I noticed about the report is this correction for the mismatch between the signal generator's output impedance and the dongle's input impedance. I'm not sure where the 1.6 dB figure comes from.

If we assume the source and load impedances above, the mismatch correction should be:

\Gamma = \frac{Z_l - Z_s}{Z_l + Z_s} = \frac{75\Omega - 50\Omega}{75\Omega + 50\Omega} = 0.2
ML = 1 - \Gamma^2 = 0.96
ML_{dB} = -10\cdot\log(1 - \Gamma^2) = 0.18 \mathrm{dB}

0.18 dB is small enough to be insignificant compared to the other measurement errors and I ignored mismatch loss completely in my noise figure calculations.
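For reference, the same arithmetic in Python (assuming purely resistive source and load impedances):

import math

def mismatch_loss_db(z_source, z_load):
    """Mismatch loss in dB between a purely resistive source and load."""
    gamma = (z_load - z_source) / (z_load + z_source)
    return -10 * math.log10(1 - gamma ** 2)

print(mismatch_loss_db(50, 75))   # approximately 0.18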

I don't actually know the input impedance of my dongle. 75 Ω seems a fair guess as that is the standard for TV receivers. The E4000 datasheet specifies an even lower loss of around 0.14 dB for a 50 Ω source (see Input Return loss (50R system) on page 11). Of course, the dongle might have some additional matching network in front of the tuner and I don't currently have the equipment at hand to measure the mismatch loss directly.

It might be that the 1.6 dB figure was measured by HB9AJG. If these tuners are in fact so badly matched, then my measurements overestimate the noise figure by a similar amount. For the purpose of comparing my results with HB9AJG's, however, I have removed this compensation from their figures.

Update: My father points out that the signal amplitude on a 75 Ω load in a 50 Ω system is in fact 1.6 dB higher than on a 50 Ω load. I was wrongly thinking in terms of a power correction. It is in fact the amplitude of the signal entering the receiver that matters, not the power. In that respect, HB9AJG's correction was accurate. On the other hand, an Agilent application note pointed out by David in a comment to my previous post shows that accounting for mismatch is not that simple.

10log(Bandwidth) in my measurements is 10log(500) = 27dB

My second problem with the report is connected with the bandwidth of the measurement. To calculate the noise figure from the minimum discernible signal (MDS), measurement bandwidth must be accurately known. Any error in bandwidth directly translates to noise figure error. HB9AJG used the SDR# software and the report says that they used a 500 Hz filter for wireless (CW) telegraphy in their MDS measurements.

I replicated their measurements in SDR# using the same filter settings and it appears to me that the 500 Hz filter is in fact narrower than 500 Hz. I should mention however that I used version 1.0.0.1333 instead of 1.0.0.135 and my version has a Filter Audio check box that the report doesn't mention. It seems to affect the final bandwidth somewhat and I left it turned on.

SDR# showing audio spectrum with 500 Hz filter enabled.

I believe the actual filter bandwidth in my case is around 190 Hz. This estimate is based on the audio spectrum curve shown on the SDR# screenshot above. The curve shows the spectrum of noise shaped by the audio filter. Since noise has a flat spectrum, this curve should be similar to the shape of the filter gain itself.

Calculating the gain-bandwidth product of the filter.

A trace of the spectrum is shown in linear scale on the graph above. A perfect square filter with 190 Hz bandwidth (lightly shaded area on the graph) has the same gain-bandwidth product as the traced line. In log scale this is equivalent to 22.8 dB.
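The same gain-bandwidth estimate can be made numerically if the trace is exported as data points; a sketch, assuming arrays of frequency in Hz and gain in dB:

import numpy as np

def equivalent_noise_bandwidth(freq_hz, gain_db):
    """Bandwidth of an ideal rectangular filter with the same peak gain
    and the same gain-bandwidth product as the traced curve."""
    gain_lin = 10 ** (np.asarray(gain_db, dtype=float) / 10)   # linear power gain
    df = np.mean(np.diff(freq_hz))                             # trace bin spacing
    return np.sum(gain_lin) * df / np.max(gain_lin)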

Finally, if I take both of these corrections and apply them to the MDS measurements for 700 MHz from HB9AJG's report, the noise figure comes out as:

NF = -136\mathrm{dB} + 1.6\mathrm{dB} + 174\mathrm{dB} - 10\log\frac{190\mathrm{Hz}}{1\mathrm{Hz}}
NF = 16.8 \mathrm{dB}

This result is reasonably close to my result of 17.0 dB for the twice-power method.

Update: The 1.6 dB figure has the wrong sign in the equation above, since it is due to a higher signal amplitude, not a lower power as I initially thought. To cancel HB9AJG's correction, it should be subtracted, giving NF = 13.6 dB.
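The same arithmetic in Python, with the 1.6 dB correction applied either way (just a restatement of the equations above):

import math

def nf_from_mds(mds_dbm, bandwidth_hz, correction_db=0.0):
    """Noise figure from a minimum discernible signal measurement,
    following the equation above."""
    return mds_dbm + correction_db + 174.0 - 10 * math.log10(bandwidth_hz)

print(nf_from_mds(-136, 190, +1.6))   # about 16.8 dB, correction added
print(nf_from_mds(-136, 190, -1.6))   # about 13.6 dB, correction subtracted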

Of course, you can argue that this exercise is all about fudging with the data until it fits the theory you want to prove. I think it shows that noise measurements are tricky and there are a lot of things you can overlook even if you're careful. The fact that this came out close to my own result just makes me more confident that what I measured has some connection with reality.

Posted by Tomaž | Categories: Analog | Comments »